Why that exact number of electrons in one coulomb? Who decided it? There is nothing wrong with the number; it just seems slightly messy. Why didn't the scientific community settle on an easier number, such as $1\times10^{-19}$, for example?
3 Answers
The charge of $1C$ was derived from the definition of the ampere. If you look at the SI units, you'll see that, perhaps surprisingly, electric current is a base unit, whereas charge is a derived quantity. This seems backwards, because charge is usually regarded as "more fundamental" than current: current is just "charge per unit time".
So why is it this way? Because measuring the charge of one electron is very hard (electrons are extremely tiny), whereas currents are easy to measure.
Consider two infinite, straight, parallel wires. The force per unit length exerted between them is
$$f=\frac{\mu_0 I_1 I_2}{2\pi r}$$
where $I_1, I_2$ are the currents, $f$ is the force per unit length, $r$ is the distance between the wires, and $\mu_0$ is a constant of known value. If we set $I_1=I_2=I$, we get
$$f=\frac{\mu_0 I^2}{2\pi r}$$
So $I=\sqrt {2\pi r f /\mu_0}$
If we introduce the values used in the old SI definition, $r=1m$ and $f=2\times10^{-7}N/m$, we get the definition of one ampere.
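As a sanity check, using the pre-2019 value $\mu_0 = 4\pi\times10^{-7}N/A^2$ (exact by definition at the time), a current of one ampere in each wire gives

$$f=\frac{\mu_0 I^2}{2\pi r}=\frac{4\pi\times10^{-7}\cdot 1^2}{2\pi\cdot 1}\,N/m=2\times10^{-7}N/m,$$

exactly the force per unit length prescribed by the old definition.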
And then we define 1 coulomb to be $1C=1A\cdot 1s$.
So the value of $1C$ was fixed first. Only later did Millikan measure how many coulombs the charge of an electron was.
EDIT for clarification:
This is the historical process that led to the definition of one coulomb of charge.
The definition of the ampere has recently been modified.
This answer explains the process by which:
1) The formula for the magnetic force between two straight current-carrying conductors was found: $f\propto I^2$.
2) This was used to define the unit of electric current, the ampere.
3) The definition of charge then follows straightforwardly: $1C=1A\cdot1s$. It was done this way because measuring currents is easier than measuring charges.
4) Millikan measured the charge of the electron, using the existing unit, the coulomb. It happened to be $\sim 1.6\cdot10^{-19}$ (see the sketch after this list).
5) The definition of the ampere has recently been changed, to make it independent of mechanical units. However, the change was made so that the numerical values do not change, because we do not want all existing books and instruments to become wrong.
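A quick arithmetic check of the figure in step 4 (a minimal Python sketch; the constant is the exact post-2019 SI value of the elementary charge):

```python
# Elementary charge, exact by definition in the post-2019 SI (coulombs)
e = 1.602176634e-19

# Number of elementary charges needed to make up one coulomb
electrons_per_coulomb = 1.0 / e

print(f"{electrons_per_coulomb:.6e}")  # prints ~6.241509e+18
```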
(I edited the question into the right form.) The ampere is now defined by fixing the numerical value of the elementary charge (the charge of an electron or a proton) in the International System of Units to be exactly $1.602176634\times10^{-19}$ coulombs.
So why not a nice round number, like $10^{-19}$ coulombs for the elementary charge, or $6\times10^{18}$ (6,000,000,000,000,000,000) for the number of electrons in a coulomb?
The answer is simple: doing so would break everything electrical. Old ammeters and new ammeters would give different readings. Replacing an old 20 ohm resistor with a new 20 ohm resistor might fry a circuit. Whenever a metrological standard is updated, the new and improved version has to be consistent with the version it replaces (while still being an improvement). The value of $1.602176634\times10^{-19}$ ampere-seconds (i.e., coulombs) for the elementary charge is consistent with the old definition of the ampere, to within experimental error.
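To get a feel for how badly a "round" value would break things, here is a rough illustration (a hypothetical comparison made up for this answer, not anything from the SI brochure):

```python
# Exact elementary charge in the post-2019 SI (coulombs)
e_si = 1.602176634e-19

# Hypothetical "round" value proposed in the question (coulombs)
e_round = 1.0e-19

# If the coulomb were redefined so that e_round were exact, every
# electrical unit would rescale by this factor relative to today:
scale = e_si / e_round
print(f"factor of {scale:.3f}")  # ~1.602, i.e. roughly a 60% jump
```

A shift of that size in every ammeter reading and resistor value is why the chosen figure had to match the old ampere to within experimental error rather than being rounded.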
The charge on the electron, $1.602176634\times10^{-19}$ coulombs (equivalently, $6.241509\ldots\times 10^{18}$ electrons per coulomb), was chosen so that the new definition of the ampere, expressed in terms of coulombs and seconds rather than kilograms and metres, agreed as closely as possible with the old, less precise and less reproducible definition, which required the measurement of a force, i.e. was expressed in terms of kilograms, metres, and seconds.
This also meant that, except for very precise measurements, instruments calibrated before the new ampere was defined would not have to be recalibrated.