You are mistaken about the source of your confusion.
It is easiest to start by simply refuting what you are doing. The method you are invoking requires that the ingredients of your computation be independent variables. However, $R_1+R_2$ is clearly dependent upon $R_1$ and $R_2$, so you cannot use the formula you are hoping to use.
I am only able to answer your question because you have included the PDF in question. You have misunderstood what the answer is trying to say. Yes, there is an "ordinary sum" method and a "sum in quadrature" method, and the given answer superficially looks like a "sum in quadrature" method.
However, you are wrong: your NCERT textbook is only ever doing the "ordinary sum" and never attempts anything more than that. What it is actually doing is invoking a theorem that it never presents coherently. Fortunately, the theorem is an intuitive one, and you should be able to supply a (handwavy) "proof" of its suitability yourself.
In particular, if it were a "sum in quadrature" method, every $\Delta R$ would appear squared, i.e. as $(\Delta R)^2$, rather than to the first power as in the answer key.
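For concreteness, here is how the two rules differ for a simple sum $R = R_1 + R_2$ (these are the standard formulas, not anything specific to your PDF):

$$\text{ordinary sum: } \Delta R = \Delta R_1 + \Delta R_2, \qquad \text{sum in quadrature: } \Delta R = \sqrt{(\Delta R_1)^2 + (\Delta R_2)^2}.$$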
Educational standards bodies like the NCERT face a serious problem. On the one hand, they are trying their best to give students an understandable and easy-to-use first step. On the other hand, they know that students have yet to learn the mathematics required to fully understand the prescriptions they are being taught. As such, they have no choice but to be internally inconsistent and hand you broken tools. This is a fact of life, not their fault.
When you use the "ordinary sum" in the addition/subtraction case, you can guarantee that the uncertainty of the result lies within the bounds you compute. This is basically the only case in which it is intuitive and good; beyond it, the "ordinary sum" suffers from too many defects to be usable as a consistent framework. Instead, at university, you will learn why we do the "sum in quadrature".
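To see that guarantee in action with made-up numbers: if $R_1 = 100 \pm 3\ \Omega$ and $R_2 = 200 \pm 4\ \Omega$, the ordinary sum gives $R = 300 \pm 7\ \Omega$, and indeed no choice of true values within the stated ranges can push $R_1 + R_2$ outside $[293, 307]\ \Omega$.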
Even in the product/quotient case, treated in the same PDF on page 26 under (b), the "ordinary sum" becomes an approximation: you can no longer guarantee that the uncertainty of the result lies within the bounds thus calculated. The reason is that, as you can see, they have thrown away the $\Delta A\,\Delta B$ term. This means they have essentially done a Taylor expansion and then thrown away every term that is quadratic or higher, retaining only the linear terms. In particular, note that even in the simple product/quotient case, the only way to obtain the formula given to you (or the correct version that university gives you) is to use differentiation.
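Here is a sketch of the step being glossed over, for a product $Z = AB$:

$$Z \pm \Delta Z = (A \pm \Delta A)(B \pm \Delta B) = AB \pm A\,\Delta B \pm B\,\Delta A \pm \Delta A\,\Delta B,$$

so dividing through by $Z = AB$ and discarding the quadratic term $\Delta A\,\Delta B$ leaves the familiar

$$\frac{\Delta Z}{Z} \approx \frac{\Delta A}{A} + \frac{\Delta B}{B}.$$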
Conceptually, though, this bounds-based bookkeeping is a dead end. It is nonsense.
Instead, you should be modelling all your uncertainties as normally distributed random variables. The appropriate measure of an uncertainty estimate is the standard deviation, which is defined as the square root of the variance. Because the variances of independent contributions add, when you properly handle the expected (mean) values and the variances of all your ingredients, you will find that the uncertainties combine in quadrature, quite like the Pythagorean theorem.
Handling the variances properly includes having to deal with what is called covariance. If you do things properly, you can prove mathematically that you are allowed to use $R_1+R_2$ as a variable; the cost of doing so, of course, is that you have to include its non-zero covariance with the other ingredients. That is why, if you throw this cross term away (the analogue of the $\Delta A\,\Delta B$ term above), as you may not have realised you did in your method, you will get the wrong answer. This is why textbooks emphasise that the ingredients must be statistically independent: statistical independence of two variables implies zero covariance, so those terms drop away.
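The key identity, stated here for any two random variables $X$ and $Y$:

$$\mathrm{Var}(X+Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X,Y).$$

Treating $R_1+R_2$ as just another independent ingredient silently assumes its covariances with $R_1$ and $R_2$ vanish, which is false: even when $R_1$ and $R_2$ are themselves independent, $\mathrm{Cov}(R_1+R_2,\,R_1) = \mathrm{Var}(R_1) \neq 0$.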
This is an important point: only by handling covariance properly do you obtain internal consistency in the mathematical theory. It also means you have a choice: either use only variables that are statistically independent of each other, or use covariances to handle whatever statistical dependence there is. The latter takes time to teach, and so educational standards bodies always choose the former first.
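If you want to see this failure mode numerically, here is a minimal Monte Carlo sketch for the parallel-resistor formula $R' = R_1 R_2/(R_1+R_2)$, with made-up values $R_1 = 100 \pm 3\ \Omega$ and $R_2 = 200 \pm 4\ \Omega$ (the numbers and names are illustrative assumptions, not taken from your PDF):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative inputs, uncertainties taken as 1-sigma: R1 = 100 ± 3, R2 = 200 ± 4 (ohms).
R1 = rng.normal(100.0, 3.0, size=n)
R2 = rng.normal(200.0, 4.0, size=n)

# Ground truth: simulate the parallel combination directly.
R_par = R1 * R2 / (R1 + R2)
print("Monte Carlo sigma:        ", R_par.std())        # ~1.41

# Correct quadrature propagation through 1/R' = 1/R1 + 1/R2,
# i.e. Delta(1/R') = sqrt( (dR1/R1^2)^2 + (dR2/R2^2)^2 ).
Rp = 1.0 / (1.0 / 100.0 + 1.0 / 200.0)
sigma_ok = Rp**2 * np.sqrt((3.0 / 100.0**2)**2 + (4.0 / 200.0**2)**2)
print("Quadrature sigma:         ", sigma_ok)           # ~1.41

# Naive quotient rule treating the numerator R1*R2 and the denominator
# R1+R2 as if they were independent, i.e. dropping their covariance.
num, den = R1 * R2, R1 + R2
sigma_bad = Rp * np.sqrt((num.std() / num.mean())**2 + (den.std() / den.mean())**2)
print("Covariance-ignored sigma: ", sigma_bad)          # ~2.6
```

The first two numbers agree at about $1.4\ \Omega$; the third, which ignores the strong positive covariance between the numerator and the denominator, comes out nearly twice as large.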
Finally, since there never really was any method that can guarantee that the uncertainty always falls within a certain range, it is much more sensible to redefine what you mean by the uncertainty. The statistical spread (i.e. the uncertainty) of a calculated result is best understood (and defined) as the width of the normal distribution that best fits the random distribution produced by all your ingredients combining into the result. You do not just want some estimate of the uncertainty range; you want the best overall estimate of it. Giving a range that is too small is obviously unacceptable, but giving a range that is too big will also cause sophisticated tools in the statistics repertoire to fail in spectacular ways.
Needless to say, educational standards bodies like the NCERT cannot be expected to teach all students calculus before teaching them how to do experiments. As such, they cannot teach you the correct method right away, but instead have to teach you a useful first crutch.
The missing theorem is this: you may treat a single-variable function of the quantity whose uncertainty you are propagating as if it were a single-ingredient object, propagate the uncertainty into that object, and then extract the uncertainty you actually want back out of it.
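In symbols, using the standard first-order rule (which is presumably what your answer key is silently invoking): for a differentiable single-variable function $f$,

$$\Delta f \approx |f'(x)|\,\Delta x, \qquad \text{e.g. } f(R) = \frac{1}{R} \;\Rightarrow\; \Delta\!\left(\frac{1}{R}\right) = \frac{\Delta R}{R^2},$$

which is how a first-power $\Delta R$ ends up paired with an $R^2$ in the denominator without any quadrature being involved.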
All of these require calculus: you need to Taylor expand, discard the higher-order (partial) derivative terms, and so on. Again, there are proper mathematical treatments of all these things, but they are locked away behind university gates.
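For reference, the general first-order result you will eventually meet at university (a standard formula, not anything specific to your PDF): for $f(x_1, \dots, x_n)$,

$$\sigma_f^2 \approx \sum_i \left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2 + 2 \sum_{i<j} \frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\,\mathrm{Cov}(x_i, x_j),$$

which collapses to the pure sum in quadrature exactly when all the covariances vanish, i.e. when the ingredients are statistically independent.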