Linux Basics: How to Download Files on the Shell With Wget

Wget is a popular and easy-to-use command line tool that is primarily used for non-interactive downloading of files from the web. wget helps users download large amounts of data, fetch multiple files, and perform recursive downloads. It supports the HTTP, HTTPS, FTP, and FTPS download protocols. The following article explains the basic wget command syntax and shows examples for popular use cases of wget.

1.1 Wget - An Overview

The wget command is called with zero or more options, which are optional, and a URL, which is always required:

wget [option] [URL]

1.2 Good to know

Wget is able to display the following information when a download is in progress:

  • Download progress (in percentage form)
  • Data quantity downloaded
  • Download Speed
  • Remaining time for the completion of the download process

Below you can find several examples of download scenarios users may be dealing with when downloading files on the Linux shell using wget:

1.3 Basic - Downloading One File

This is the most elementary case: users execute the wget command without any options, passing only the URL of the file to be downloaded. The following command shows the pattern:

wget [URL]

1.4 Download and Save the File using a Different Name

This step is simply an extension of the previous one and may be required when you wish to assign a different name to the file saved on the local hard disk. All you need to do is to add the -O option followed by the preferred file name:

wget -O [Preferred_Name] [URL]

Using the above command, you will be able to save the file using the name you wish to assign it.

1.5 Limiting the Speed of the Download

Normally, wget will consume as much bandwidth as is available when downloading files from the web. You can restrict the download speed to a chosen value by adding the "--limit-rate" option to the basic wget command:

wget --limit-rate=[VALUE] [URL]

By specifying the preferred speed in the "VALUE" field of the above command, you can cap the download speed as required. Add the suffix "k" for kilobytes per second or "m" for megabytes per second, e.g. "--limit-rate=2m" to limit the maximum download speed to 2 megabytes per second.

1.6 Resuming a Stopped/Interrupted Download

If the download of a huge file is interrupted after you start it with wget, the command below lets you resume the download from where it stopped, without having to download the whole file again. All you need to do is execute the wget command with the "-c" option:

wget -c [URL]

The above command will resume the download process from where it stopped earlier (when the download server supports it), thus letting you download the entire file in a seamless fashion.

1.7 Continuing the Download Process in the Background

When downloading a huge file, you may prefer to continue the download process in the background and keep using the shell prompt while the file gets downloaded. In this case, execute the wget command with the -b option and monitor the download status in the wget-log file, where the download progress is logged. Use the following command to start the download in the background:

wget -b [URL]

You may check the download progress by accessing the content of the wget-log file using the tail command as follows:

tail -f wget-log

The above set of commands will help you use the shell prompt while a large file gets downloaded in the background and also keep an eye on the download progress.
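The two commands can be combined as below (example.com is a placeholder URL; plain cat is used instead of "tail -f" here so that the example terminates on its own rather than following the log indefinitely):

```shell
# Start the download in the background; wget reports the background PID
# and writes its progress to wget-log in the current directory.
wget -b https://example.com/index.html

# Give the transfer a moment, then inspect the logged progress.
sleep 2
cat wget-log
```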

1.8 Customizing the Number of Attempts (Increasing/Decreasing)

By default, the wget command makes up to 20 attempts to connect to the given server to complete the download in the event of lost or disrupted connectivity. Users can change this number as preferred by using the "--tries" option. The following command does exactly that:

wget --tries=[DESIRED_VALUE] [URL]

By specifying the preferred number in the DESIRED_VALUE field, you can set the number of retries in case of interrupted connectivity.

1.9 Reading a File for Multiple Downloads

If you wish to download multiple files, prepare a text file containing the list of URLs of all the files to be downloaded, one per line. Then pass that text file to wget with the -i option, and wget will work through the list:

wget -i [TEXT-FILE-NAME]

The above command shall facilitate downloading of multiple files in a hassle-free manner.

1.10 Downloading a Complete Website

If you wish to retain a copy of a website to refer to or read locally, or to save a copy of your blog to the hard disk as a backup, you can execute the wget command with the --mirror option, as follows:

wget --mirror [URL]

The above command shall help you to mirror the desired website/save data locally for future reference.

1.11 Rejection of Specific File Types

Sometimes you might wish to download an entire website except files of a particular type, for example, videos/images. You may make use of the reject option with the wget command (given below):

wget --reject=[FILE-TYPE] [URL]

The above command enables you to reject the specified file types while downloading a website in its entirety.

1.12 FTP Downloads

The FTP Downloads may be of two types:

1. Anonymous FTP download
2. Authenticated FTP download

Consequently, there is a unique command for downloading each type.

For Anonymous FTP downloading, please use the following command:

wget [FTP-URL]

For Authenticated FTP Download, please use the following command:

wget --ftp-user=[USERNAME] --ftp-password=[PASSWORD] [URL]

Each of the above commands shall lead to the required FTP download.
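The two cases are sketched below; the server name, paths, and credentials are placeholders, so the commands are shown commented out - substitute your own values before running them:

```shell
# ftp.example.com, the file paths, and the "alice"/"secret" credentials
# below are placeholders; replace them with real values before running.

# Anonymous FTP download (wget logs in as the user "anonymous" automatically):
#wget ftp://ftp.example.com/pub/file.tar.gz

# Authenticated FTP download with explicit credentials:
#wget --ftp-user=alice --ftp-password='secret' ftp://ftp.example.com/private/file.tar.gz
```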
