CW because I can't make anything rigorous
I can imagine situations where a unary (one-hot) encoding on $2^n$ qubits is more efficient, in terms of circuit depth, than a binary encoding on only $n$ qubits. In such a use case, a large number of ancilla qubits are used to execute the circuit more efficiently in the unary encoding, and all of those ancillae are then uncomputed back to $|0\rangle$.
For example, conventional classical computer engineering and circuit design make heavy use of decoders/demultiplexers and multiplexers to switch between these two encodings.
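As a concrete picture of the quantum analogue of such a decoder, here is a minimal Qiskit sketch (the function name and register layout are mine; I assume `QuantumCircuit.mcx` with a list of controls and a single target) that writes an $n$-qubit binary register into a $2^n$-qubit one-hot register:

```python
from qiskit import QuantumCircuit, QuantumRegister

def binary_to_one_hot(n: int) -> QuantumCircuit:
    """Decoder: mark, in a 2**n-qubit one-hot register, the value held
    in an n-qubit binary register.

    For each basis state |k> of the binary register, a multi-controlled X
    flips the k-th qubit of the one-hot register. The binary register is
    left unchanged, so the map is unitary and can be uncomputed later.
    """
    binary = QuantumRegister(n, "binary")
    one_hot = QuantumRegister(2**n, "one_hot")
    qc = QuantumCircuit(binary, one_hot)

    for k in range(2**n):
        # Flip the controls corresponding to 0 bits of k ...
        for j in range(n):
            if not (k >> j) & 1:
                qc.x(binary[j])
        # ... so that this multi-controlled X fires exactly on |k>.
        qc.mcx(list(binary), one_hot[k])
        # Undo the temporary X gates.
        for j in range(n):
            if not (k >> j) & 1:
                qc.x(binary[j])
    return qc
```

Applying the inverse circuit after the unary-encoded work is done returns the $2^n$ one-hot qubits to $|0\rangle$, which is exactly the decoder-then-uncompute round trip described above.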
Something like this happens in the recent decoded quantum interferometry (DQI) algorithm of Jordan et al., with its one-hot or $k$-hot Dicke states. (Both DQI and the Yamakawa-Zhandry algorithm realize a non-unitary process by implementing a unitary process and decoding the errors associated with the desired non-unitarity, all while in superposition.)
As another example, I can envision temporarily switching from first quantization, which uses $O(d\log_2 n)$ qubits to record which of the $n$ modes each of the $d$ fermions occupies, to second quantization, which uses $O(n)$ such qubits. The exponentially many extra qubits used in second quantization amount to additional ancillae. See, e.g., *Fault-Tolerant Quantum Simulations of Chemistry in First Quantization* by Su et al.
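To make that gap concrete, here is a toy Python comparison of the two qubit counts (the scalings are the standard ones quoted above; the particular values of $d$ and $n$ are purely illustrative):

```python
from math import ceil, log2

def first_quantized_qubits(d: int, n: int) -> int:
    """First quantization: each of the d fermions stores its mode index in binary."""
    return d * ceil(log2(n))

def second_quantized_qubits(d: int, n: int) -> int:
    """Second quantization: one occupation qubit per mode, independent of d."""
    return n

for d, n in [(10, 2**10), (10, 2**16), (100, 2**20)]:
    print(f"d={d:>3}, n={n:>8}: "
          f"first quantization = {first_quantized_qubits(d, n):>5} qubits, "
          f"second quantization = {second_quantized_qubits(d, n):>8} qubits")
```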
On the above note, there are probably rigorous results in graph theory showing that for $d$-sparse graphs on $n$ vertices, the adjacency-matrix representation, which uses $O(n^2)$ bits, is much more efficient in terms of time complexity for some tasks than the adjacency-list representation, which uses $O(nd\log_2 n)$ bits. For example, I would much rather calculate the lowest eigenvalue or the determinant of a graph from its adjacency matrix than directly from its adjacency list; I would convert the list to a matrix and then do the necessary matrix operations. The extra bits are certainly helpful ancillae!
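A classical sketch of that round trip (the helper name `adjacency_list_to_matrix` is mine; NumPy's `eigvalsh` and `det` do the dense linear algebra once the matrix is built):

```python
import numpy as np

def adjacency_list_to_matrix(adj_list: dict, n: int) -> np.ndarray:
    """Expand a sparse adjacency list into a dense n x n adjacency matrix.
    The extra O(n^2) bits play the role of the helpful 'ancilla' storage."""
    A = np.zeros((n, n))
    for u, neighbors in adj_list.items():
        for v in neighbors:
            A[u, v] = 1.0
            A[v, u] = 1.0  # undirected graph assumed
    return A

# A 4-vertex path graph given as an adjacency list.
adj_list = {0: [1], 1: [2], 2: [3]}
A = adjacency_list_to_matrix(adj_list, 4)

# With the dense matrix in hand, standard linear algebra is immediate.
print("lowest eigenvalue:", np.linalg.eigvalsh(A)[0])
print("determinant:      ", np.linalg.det(A))
```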