
I have a text file:

Google.com
Youtube.com
Gmail.com
Yahoo.com

I am trying to open them with Lynx. Here is the idea:

I will store all of those links in a text file, say links.txt, then open each of them with Lynx, and then terminate Lynx using kill.

Here is the code I wrote, but it's not correct:

for i in links.txt
do
lynx $i
sleep 10
pkill lynx
done

What's wrong here?


2 Answers


After several iterations...

for url in $(cat links.txt); do
    timeout 10 lynx "$url"
done

Lynx is blocking (and it has to be, to work), so sleeping doesn't work properly, and it also tries to grab stdin, which makes piping things "interestingly" difficult. See the sketch below for a safer way to iterate over the lines of a file.
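
A minimal sketch of the usual while read pattern for iterating over the lines of a file; since lynx grabs stdin, it is pointed back at the terminal (/dev/tty) here so it doesn't swallow the remaining lines of links.txt. This assumes the script runs in an interactive terminal:

while IFS= read -r url; do
    # keep lynx's stdin on the terminal, otherwise it eats the rest of links.txt
    timeout 10 lynx "$url" < /dev/tty
done < links.txt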

Lynx can be a bit annoying with its prompts for allowing cookies. You can either change its settings if it's a problem or you can pass in the -accept_all_cookies flag, like so:

for url in $(cat links.txt); do
    timeout 10 lynx -accept_all_cookies "$url"
done

Today I learned about the timeout command, so I'm happy.


To print a status at the end, the only way I can see is to check that the URL is okay separately, like so:

for url in $(cat links.txt); do
    timeout 10 lynx -accept_all_cookies "$url"
    if [[ $(curl -o /dev/null --silent --head --write-out '%{http_code}\n' "$url") -eq 200 ]]; then
        echo "Getting $url successful"
    else
        echo "Getting $url unsuccessful"
    fi
done
Oli

In your script, the lynx call locks the terminal: the loop never reaches sleep 10 and pkill, because lynx only exits when you press "Q".
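
If you want to keep the sleep/kill idea from the question, lynx would have to run in the background so the loop can continue. A minimal sketch of that approach; note that backgrounding a full-screen program like lynx can leave the terminal in a messy state, so the timeout approach in the other answer is cleaner:

while IFS= read -r url; do
    lynx "$url" &            # run lynx in the background so the script continues
    pid=$!                   # remember this lynx instance's PID
    sleep 10                 # give the page 10 seconds
    kill "$pid" 2>/dev/null  # then terminate exactly that instance
done < links.txt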

So I would prefer something different: why don't you use wget? Something like this:

for url in $(cat links.txt); do
    wget -qO- "$url"
    sleep 1
done

wget exits after downloading the link. Lynx is more of an interactive console browser (it locks the terminal); it's not made for scripts.
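
That said, lynx can also behave like wget by using its -dump option, which renders the page as plain text to stdout and exits immediately. A minimal sketch:

for url in $(cat links.txt); do
    lynx -dump "$url"   # render the page as text and exit, no terminal locking
done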

chaos