
I have a large folder with subfolders of PDF files. I want to get all the PDF files (290 of them) into one directory (I thought that would be simple), so that I can concatenate them into one document with my PDF program, pdfshuffle, or something like that. I got the file paths and names into a file, out.txt, using:

find $(pwd) -iname '*.PDF' > out.txt

out.txt contains paths and files like this:

...
/home/user/Downloads/pedals/PEDALS/Electro-Harmonix/Small Stone Phaser.PDF
/home/user/Downloads/pedals/PEDALS/Electro-Harmonix/Dr. Quack (Doctor Q Melhorado).pdf
/home/user/Downloads/pedals/PEDALS/Marshall/Guv'Nor 1.PDF
/home/user/Downloads/pedals/PEDALS/Marshall/Guv'Nor 2.pdf
...

Cool, so then I thought I could do this:

#!/bin/bash
input="out.txt"
while IFS= read -r line
do
  #echo "$line"
  cp $line .
done < "$input"

But it only copies about 36 of the PDFs, and the shell says it cannot stat files:

...
cp: cannot stat 'Boogie/Vtwin.pdf': No such file or directory
cp: cannot stat '/home/user/Downloads/pedals/PEDALS/Mesa': No such file or directory
...

The file is clearly there, and I can view it, etc. What's the problem with my method or script?

j0h

1 Answer


Handling filenames containing spaces or other funny characters is easily done using find and xargs. Read man find and man xargs, and do something like

find . -mindepth 1 -type f -iname '*.pdf' -print0 | \
    xargs -0 -r echo mv -t "$PWD"

to verify the commands, and when they match your expectations, remove the "echo" to actually execute them.

waltinator