Assuming GNU sort
sort does not need an amount of RAM larger than or equal to the size of the processed file(s): it uses the available memory plus temporary files to sort big files in batches. It does this efficiently, with no need for user intervention, when reading directly from files. However, when reading from a pipe or from STDIN, you may need to set the buffer size explicitly with the --buffer-size=SIZE option to get good performance.
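As a sketch of that pipe case (the file names and the 1G buffer value here are assumptions; tune the size to your available RAM):

```shell
# When sort reads from a pipe it cannot see the input size in advance,
# so give it an explicit main-memory buffer with --buffer-size (-S).
cat *.txt | sort --buffer-size=1G | uniq -d > dupfile
```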
So what you most likely need is enough freely usable disk space under /tmp. If disk space is not enough, you can try the --compress-program=PROG option (PROG is the compression program to be used, such as gzip; you need to specify it, and it must be installed on your system) to compress and decompress the temporary files during the sorting process, like so:
sort --compress-program=gzip *.txt | uniq -d > dupfile
The crashes are most likely due to sort running more threads in parallel than your system can handle at once. You can limit that to reduce system load with the --parallel=N option (by default, GNU sort uses one thread per available processor, up to 8; the lower you set N, the slower the processing, but system load drops and the crashes should stop), like so:
sort --parallel=2 *.txt | uniq -d > dupfile
These two options can also be used together like so:
sort --compress-program=gzip --parallel=2 *.txt | uniq -d > dupfile
Alternatively, you can do it in two steps: first pre-sort the files one by one, then use the --merge option on the already sorted files to merge them without re-sorting, like so:
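The pre-sorting step could look like this (a sketch, assuming your input files are the *.txt files in the current directory):

```shell
# Step 1: sort each file in place, one at a time, which keeps memory
# use low. The -o option lets sort safely write back to its input file.
for f in *.txt; do
    sort -o "$f" "$f"
done
```

Step 2 is then the --merge command below.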
sort --merge *.txt | uniq -d > dupfile
And of course, you can use all three options together on pre-sorted files to reduce the load on your system, like so:
sort --compress-program=gzip --parallel=2 --merge *.txt | uniq -d > dupfile
To know which duplicate lines came from which file(s), you can use grep with the -F option, which treats the search patterns as fixed strings rather than regular expressions (giving better performance), together with the -x option, which matches only whole lines, like so:
grep -Fx -f dupfile *.txt > resultfile