
I was studying AI when a question came to my mind.

Basically, I still can't figure out how AI differs from traditional programming. I don't mean to doubt the cleverness of its solutions, methods and applications, but they seem to me the natural evolution of machines performing ever more complex tasks, not a disruptive break with the past. The improvements and progress are crystal clear, but it is far less clear to me how, when and where the 'intelligence' begins.

Algorithms, and IT more generally, have always tried to internalise human intuitions into machines. Technological progress has indeed made it possible to deal with formerly intractable amounts of data, and to do it with more power and efficiency, allowing the implementation of more and more human-specific abilities. But this, too, seems to me progress rather than a change of paradigm, nothing I would call 'intelligence' unless we are to use that name for the whole story.

Based on these perplexities, I ask: what (in your opinion, or according to authoritative experts) justifies pairing the word 'intelligence' with 'artificial'? How can one recognise, at least roughly, the border between these technologies and the former ones? I know it's a general question, so everyone should feel free to approach it according to their own sensitivity to (and expertise in) the topic.

2 Answers


Indeed this is a hotly debated question in the field of AI/ML. Your current view matches the more traditional intelligence amplification (enhancement) paradigm, in which AI is a process for creating smart tools: the ultimate goal is not to simulate or exceed humans but to help humans get things done more easily. Under that view it is indeed not that different from every other technology.

Intelligence amplification (IA) (also referred to as cognitive augmentation, machine augmented intelligence and enhanced intelligence) refers to the effective use of information technology in augmenting human intelligence. The idea was first proposed in the 1950s and 1960s by cybernetics and early computer pioneers... AI has encountered many fundamental obstacles, practical as well as theoretical, which for IA seem moot, as it needs technology merely as an extra support for an autonomous intelligence that has already proven to function. Moreover, IA has a long history of success, since all forms of information technology, from the abacus to writing to the Internet, have been developed basically to extend the information processing capabilities of the human mind.

However, with the advent of big data and tremendous hardware/software improvements in deep learning, some multi-modal 'intelligent' systems are believed to be able to make decisions on their own, without a human in the loop, taking on a dominant or even dangerous role that many experts have warned about; many also see this as the path towards artificial general intelligence (AGI). For example, a robot equipped with advanced NLP, computer vision and continuously learning physical-movement components can learn its own reward function from its environment by observing human behavior, a technique known as inverse reinforcement learning, which could be a step towards AGI. Such systems' learning rules are not explicitly coded as top-down, functionally specified business logic, the way most other software addresses problems in some narrow scope; they are expected to solve many (perhaps simple) general problems across many quite different environments.
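To make the "learning a reward from observed behavior" idea concrete, here is a deliberately crude toy sketch, not a real inverse-RL algorithm: it assumes that states a demonstrator visits often are states the demonstrator values, and turns visit frequencies into an estimated reward. All names and the data are made up for illustration.

```python
from collections import Counter

def infer_reward(trajectories):
    """Estimate a per-state reward from observed behavior alone.

    No reward was ever hand-coded; it is read off the demonstrations.
    """
    visits = Counter(state for traj in trajectories for state in traj)
    total = sum(visits.values())
    # Relative visit frequency serves as a crude stand-in for reward.
    return {state: count / total for state, count in visits.items()}

# Demonstrations: the 'expert' keeps returning to the state 'goal'.
demos = [["start", "goal", "goal"], ["start", "mid", "goal"]]
reward = infer_reward(demos)
```

Here 'goal' ends up with the highest inferred reward even though no rule about it was ever written, which is the contrast with top-down coded business logic that the paragraph above is drawing. Real inverse RL (e.g. maximum-entropy IRL) is far more involved than this frequency count.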

cinch

There are several ways in which AI differs from traditional programming.

Traditional programming is mostly hardcoded, fixed logic built from conditions or arithmetic operations. If-else statements, loops, conditional and ternary operators, exponential operations and so on are all based on conditional logic or arithmetic. But in the real world there are many situations in which you need to make decisions that have no pre-defined cases. Let's go through the examples below -
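A minimal sketch of what "hardcoded, fixed logic" means in practice (the function and thresholds are invented for illustration): every decision is an explicit, hand-written condition, and the program can only ever handle the cases its author foresaw.

```python
def classify_temperature(celsius):
    """Classic rule-based logic: each branch was written by a human."""
    if celsius < 0:
        return "freezing"
    elif celsius < 20:
        return "cold"
    elif celsius < 30:
        return "warm"
    else:
        return "hot"
```

This style is deterministic and transparent, but brittle: a case that falls outside the coded rules simply cannot be expressed, which is exactly the limitation the examples below run into.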

  • An image is basically a 3-dimensional array of pixels. Now if you are given the task of identifying a car in an image, there are thousands of unpredictable cases. A car can be of any color, any size (small or big, depending on how far the car is from the camera), any shape (SUV, sedan, hatchback, etc.). Now let's try to solve this with traditional programming tools. How many conditions can you code? The car can be any color (many ranges of pixel values in the array), any size (many possible sub-array sizes), any shape (you can't fix the shape of the car's region: rectangular, triangular, quadrilateral, etc.). On what conditions will you tell your computer to find the car in the image? It's impossible using conditions. This is where artificial intelligence steps in. Convolutional neural networks (CNNs), multi-layer perceptrons and other neural-network layers learn weights and biases that let the network handle these unpredictable conditions, identify the car in your image and draw a beautiful bounding box around it. Artificial intelligence involves mathematical calculations, but they are nowhere similar to hand-coded conditional and arithmetic logic.

  • A similar argument applies to natural language processing. For example, if I ask you to complete the sentence "Plastic is _____ for ____", there are multiple ways to fill the blanks: plastic is harmful for the environment, plastic is used for manufacturing, plastic is bad for health, etc. Many such combinations can fill the blanks. Let's try to solve it with traditional programming. Which combinations of words from the vocabulary can be used? How can you write logic that handles every fill-in-the-blank question of this kind, such as "Programming is ___"? Which contexts do you need to consider? It's basically impossible to cover the universal cases with the traditional tools of conditional operators and arithmetic. Models like GPT can understand the context, using embeddings, attention layers and so on, and use it to fill in those words for you.
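The contrast in the two examples above, learned weights versus written conditions, can be shown at toy scale. The sketch below (nothing like a real vision or NLP model, just the smallest possible learner) trains a single perceptron whose decision boundary comes entirely from labelled examples; no if-else rule about the data is ever coded.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and a bias from examples via the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                      # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Toy data: learn OR-like behavior purely from labelled examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = train_perceptron(X, y)
```

After training, the model reproduces the labels, yet the "conditions" it applies live in `w` and `b`, numbers found by the update rule, rather than in branches a programmer wrote. Scaled up by many layers and millions of weights, this is the same principle behind the CNNs and GPT-style models mentioned above.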

Traditional programming and AI can be said to be loosely analogous to linear algebra and calculus: linear algebra handles fixed logic and exact numbers, while calculus can tackle problems involving ambiguity and dependence.

This justifies the word "intelligence" (such a system can handle essentially any case once it is sufficiently trained), and "artificial", because it was made by humans after all. I hope the above examples are sufficient to explain the boundary between traditional programming and AI.

Cheers!! Thank you!

YadneshD