This apparent inconsistency comes about because the rule that you add the relative errors when multiplying or dividing only applies when those relative errors are uncorrelated. But the relative errors in $V$, $I$, and $R$ are correlated with each other. Correlation means, in this case, that if you calculate $V$ (with its uncertainty) from $I$ and $R$, and then use $V$ and $R$ to calculate $I$, the errors in $R$ partially "cancel out" the errors in $V$. This gives you a smaller error in $I$ than the formula indicates; in fact, just enough smaller that it precisely matches the original uncertainty you started with.
Here's a simple example that should make this clear. In this example, I'm using the formula $A = B + C$ (instead of $V = IR$), and I'm pretending that all three quantities can only have integer values.
Suppose you measure that $B$ could be $-1$, $0$, or $1$, but your measurement device is not precise enough to distinguish among the three values. You might write this as $B = 0\pm 1$. Similarly, suppose you measure the same possible three values for $C$: $-1$, $0$, or $1$, which you would write as $C = 0\pm 1$. (Yes, this is a weird example, but there's a reason I'm choosing to make the average value of both measurements zero.)
The real way to calculate the uncertainty of $A$ is to actually consider all the possible values $B$ and $C$ could take on. There are nine possibilities:
$$\begin{align}
B &= -1 & C &= -1 & A &= -2 \\
B &= -1 & C &= 0 & A &= -1 \\
B &= -1 & C &= 1 & A &= 0 \\
B &= 0 & C &= -1 & A &= -1 \\
B &= 0 & C &= 0 & A &= 0 \\
B &= 0 & C &= 1 & A &= 1 \\
B &= 1 & C &= -1 & A &= 0 \\
B &= 1 & C &= 0 & A &= 1 \\
B &= 1 & C &= 1 & A &= 2
\end{align}$$
So the range of possible values for $A$ is $-2$ to $2$, which you could write as $0\pm 2$. That gives an uncertainty of $\delta A = 2$.
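This enumeration is easy to check by brute force. Here's a short Python sketch that generates all nine $(B, C)$ pairs and reads off the range of $A$:

```python
# Brute-force check: enumerate every (B, C) pair and see what range A = B + C spans.
values = [-1, 0, 1]
A_values = sorted({b + c for b in values for c in values})
print(A_values)  # all the values A can take

# Half the total spread gives the uncertainty delta_A.
delta_A = (max(A_values) - min(A_values)) / 2
print(delta_A)
```

This prints `[-2, -1, 0, 1, 2]` and `2.0`, matching the table above.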
If you use the rule you've learned for adding uncertainties, you would find
$$\delta A = \delta B + \delta C = 1 + 1 = 2$$
So far so good; in this case, the rule you've learned gives exactly the right uncertainty. That makes sense because the rule works for uncorrelated errors, and the uncertainties in $B$ and $C$ are, in fact, uncorrelated. No matter what value $B$ has, the distribution of values that $C$ can have is the same, and vice versa ($B$ has the same chance of being $-1$ for any value of $C$, and similarly for $B = 0$ and $B = 1$).
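That independence claim can also be checked directly: for each value of $B$, list the values $C$ can take, and confirm the list never changes.

```python
# Check that B and C are uncorrelated in this enumeration: for every value of B,
# the set of values C can take is the same (and by symmetry, vice versa).
values = [-1, 0, 1]
pairs = [(b, c) for b in values for c in values]
for b in values:
    c_options = sorted(c for (bb, c) in pairs if bb == b)
    print(b, c_options)  # the C options don't depend on B
```

Every line prints the same `[-1, 0, 1]`, which is exactly what "uncorrelated" means here.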
Now what happens if you rewrite the formula as $C = A - B$? Well, now the formula for uncorrelated errors tells you that
$$\delta C = \delta A + \delta B = 2 + 1 = 3$$
which would imply that $C$ could fall anywhere within $3$ of its average value of $0$. That doesn't make sense! The only possible values of $C$ are $-1$, $0$, and $1$; $\delta C$ is supposed to be $1$.
The reason the formula doesn't work is that the errors are correlated: the set of possible values of $B$ now depends on the value of $A$, and vice versa. For a given value of $A$, the allowed values of $B$ only ever give $C$ in the range $-1$ to $1$. Specifically:
$$\begin{align}
A &= -2 & B &= -1 & C &= -1 \\
A &= -1 & B &= -1 & C &= 0 \\
A &= -1 & B &= 0 & C &= -1 \\
A &= 0 & B &= -1 & C &= 1 \\
A &= 0 & B &= 0 & C &= 0 \\
A &= 0 & B &= 1 & C &= -1 \\
A &= 1 & B &= 0 & C &= 1 \\
A &= 1 & B &= 1 & C &= 0\\
A &= 2 & B &= 1 & C &= 1
\end{align}$$
For example, if $A = -2$, which is below average, the only option for $B$ is $-1$, which is also below average. When you take the difference $A - B$, some of the below-averageness cancels out, and you're left with $C = -1$, which is a little below average but still within the $\pm 1$ uncertainty it started with. Similarly, if $A = 1$, which is above average, $B$ also has to be at or above the average ($0$ or $1$), large enough that when you take the difference $A - B$ you're left with a value of $C$ within the original uncertainty range.
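The cancellation can be verified with the same kind of brute-force enumeration: build only the $(A, B)$ pairs that can actually occur (those generated by $A = B + C$), and check what values $C = A - B$ really takes.

```python
# Enumerate the (A, B) pairs that can actually occur, i.e. those generated
# by picking B and C from {-1, 0, 1} and setting A = B + C.
values = [-1, 0, 1]
real_pairs = {(b + c, b) for b in values for c in values}

# For the real pairs, C = A - B never leaves its original range.
C_values = sorted({a - b for (a, b) in real_pairs})
print(C_values)
```

This prints `[-1, 0, 1]`: the correlation between $A$ and $B$ keeps $C$ inside its original $\pm 1$ uncertainty, even though the uncorrelated formula predicts $\pm 3$.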
If you used the formula for uncorrelated errors, it would assume that regardless of the value of $A$, the value of $B$ could be anything in the range $-1$ to $1$. If that were the case, you would get cases like $A = -2$ and $B = 1$, which would give you $C = -3$; that's where the uncertainty range of $3$ comes from. But in reality, there is no case in which $A = -2$ and $B = 1$. That's why the uncorrelated error formula doesn't work. In the following list, the pink rows are cases that the uncorrelated error formula takes into consideration, but which can't really happen:
$$\require{color}
\begin{align}
A &= -2 & B &= -1 & C &= -1 \\
\color{pink}A &\color{pink}= -2 &
\color{pink}B &\color{pink}= 0 &
\color{pink}C &\color{pink}= -2 \\
\color{pink}A &\color{pink}= -2 &
\color{pink}B &\color{pink}= 1 &
\color{pink}C &\color{pink}= -3 \\
A &= -1 & B &= -1 & C &= 0 \\
A &= -1 & B &= 0 & C &= -1 \\
\color{pink}A &\color{pink}= -1 &
\color{pink}B &\color{pink}= 1 &
\color{pink}C &\color{pink}= -2 \\
A &= 0 & B &= -1 & C &= 1 \\
A &= 0 & B &= 0 & C &= 0 \\
A &= 0 & B &= 1 & C &= -1 \\
\color{pink}A &\color{pink}= 1 &
\color{pink}B &\color{pink}= -1 &
\color{pink}C &\color{pink}= 2 \\
A &= 1 & B &= 0 & C &= 1 \\
A &= 1 & B &= 1 & C &= 0\\
\color{pink}A &\color{pink}= 2 &
\color{pink}B &\color{pink}= -1 &
\color{pink}C &\color{pink}= 3 \\
\color{pink}A &\color{pink}= 2 &
\color{pink}B &\color{pink}= 0 &
\color{pink}C &\color{pink}= 2 \\
A &= 2 & B &= 1 & C &= 1
\end{align}$$
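The pink rows can be picked out programmatically: take every combination of $A$ in $-2$ to $2$ with $B$ in $-1$ to $1$, and flag the combinations that never occur among the real pairs.

```python
# Real (A, B) pairs are the ones generated by A = B + C.
values = [-1, 0, 1]
real_pairs = {(b + c, b) for b in values for c in values}

# The "pink rows": (A, B) combinations the uncorrelated formula allows
# but which can never actually happen, along with the spurious C = A - B
# values they would produce.
impossible = [(a, b, a - b) for a in range(-2, 3) for b in values
              if (a, b) not in real_pairs]
print(impossible)
```

This finds exactly the six pink rows from the table, including the extreme cases $C = -3$ and $C = 3$ that inflate the naive uncertainty estimate.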
You can also envision this graphically, by plotting $A$ against $B$. The colors of the dots correspond to the colors in the table above: black dots are the points that can really happen, while pink dots are the points that can't really happen but that are (wrongly) incorporated into the uncorrelated error formula.

This plot is a discrete version of something you'll see in a lot of scientific papers. When you don't have the requirement that the variables be integers, then instead of discrete points, you'll get a smooth region, but it's basically the same idea.

In this plot, the gray ellipse shows the most likely values for $A$ and $B$ to have, in some hypothetical experiment. The fact that it's a slanted ellipse shows that $A$ and $B$ are correlated. If they weren't correlated, the shape would be a circle, like the pink one, or perhaps an ellipse whose long axis points straight vertically or horizontally. (The reason it'd be an ellipse instead of a rectangle has to do with probability distributions; that's more than I can get into here.)
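A continuous version of the same effect is easy to simulate. Here's a hypothetical sketch using NumPy (the unit standard deviations are made up for the example): $B$ and $C$ are independent, $A = B + C$, so $A$ and $B$ end up correlated, and the spread of $A - B$ stays small.

```python
import numpy as np

# Hypothetical continuous analogue: B and C are independent standard normals,
# and A = B + C, so A and B are correlated (their joint samples form a
# slanted ellipse like the gray one described above).
rng = np.random.default_rng(0)
B = rng.standard_normal(100_000)
C = rng.standard_normal(100_000)
A = B + C

# Recovering C as A - B: because A and B are correlated, the spread of A - B
# equals the original spread of C (about 1), far less than what adding the
# individual spreads would predict (about 2.41).
print(np.std(A - B))          # spread of the recovered C
print(np.std(A) + np.std(B))  # what the naive uncorrelated rule predicts
```

Plotting `A` against `B` as a 2D histogram would reproduce the slanted-ellipse picture: the correlation is what keeps $A - B$ from spreading out.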
For the fun of it, here is an example of correlated uncertainties from the Planck collaboration's paper on cosmological parameters:

This plot shows the correlation between the possible values of the dark energy density and the matter density in the universe. Even though the dark energy density $\Omega_\Lambda$ and the matter density $\Omega_m$ could each individually take on a range of different values, the sum of the two, which represents the total energy density, is known to be very close to $1$ (the diagonal line), which is only possible because the scientists knew to look for this kind of correlation in the uncertainties.