Questions tagged [hardware]

For questions involving, but not exclusively about, hardware. (Please use the "hardware-evaluation" tag for questions exclusively about the analysis of hardware components.)

Ideally this tag is for questions that involve hardware, but are not exclusively about hardware.

For questions specifically asking for factual evaluation of hardware, use the "hardware-evaluation" tag.

35 questions
11
votes
3 answers

How powerful a computer is required to simulate the human brain?

How much processing power is needed to emulate the human brain? More specifically, neural simulation, such as communication between neurons and processing certain data in real time. I understand that this may be a bit of speculation and it's…
kenorb
  • 10,525
  • 6
  • 45
  • 95
10
votes
4 answers

Are we technically able to make, in hardware, arbitrarily large neural networks with current technology?

If neurons and synapses can be implemented using transistors, what prevents us from creating arbitrarily large neural networks using the same methods with which GPUs are made? In essence, we have seen how extraordinarily well virtual neural networks…
10
votes
4 answers

How does using ASIC for the acceleration of AI work?

We can read on the Wikipedia page that Google built a custom ASIC chip for machine learning, tailored for TensorFlow, which helps to accelerate AI. Since ASIC chips are specially customized for one particular use without the ability to change its…
9
votes
1 answer

How powerful is the machine that recently beat professional poker players?

How powerful is the machine that recently beat professional poker players (DeepStack)?
7
votes
1 answer

Who manufactures Google's Tensor Processing Units?

Does Google manufacture TPUs? I know that Google engineers are the ones responsible for the design, and that Google is the one using them, but which company is responsible for the actual manufacturing of the chip?
Alecto
  • 609
  • 1
  • 7
  • 10
7
votes
1 answer

Are more than 8 high performance Nvidia GPUs practical for deep learning applications?

I was prompted towards this question while trying to find server racks and motherboards specialized for artificial intelligence. Naturally, I went to the SuperMicro website. There, the chassis+motherboard which supported the maximum number of GPUs…
Rushat
  • 139
  • 8
7
votes
1 answer

What exactly is an XPU?

I know about CPUs, GPUs and TPUs, but this is the first time I have read about an XPU, in the PyTorch documentation for Module: xpu(device=None) Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers…
hanugm
  • 4,102
  • 3
  • 29
  • 63
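
A minimal usage sketch for the Module.xpu() method quoted in the excerpt above, assuming a recent PyTorch build with Intel XPU support (the layer and tensor shapes here are hypothetical, chosen only for illustration):

    import torch

    # Any torch.nn.Module works the same way; a small Linear layer is used
    # purely as an example.
    model = torch.nn.Linear(128, 64)

    # Module.xpu() moves all parameters and buffers to the XPU device,
    # analogous to model.cuda() for NVIDIA GPUs. This only works on a
    # PyTorch build with XPU (Intel GPU) support.
    if torch.xpu.is_available():
        model = model.xpu()
        x = torch.randn(1, 128, device="xpu")
        y = model(x)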
7
votes
1 answer

How do neural network topologies affect GPU/TPU acceleration?

I was thinking about different neural network topologies for some applications. However, I am not sure how this would affect the efficiency of hardware acceleration using GPU/TPU/some other chip. If, instead of layers that would be fully connected,…
5
votes
1 answer

Which artificial intelligence algorithms could use tensor-specific hardware?

AI algorithms involving neural networks can use tensor-specific hardware. Are there any other artificial intelligence algorithms that could benefit from many tensor calculations in parallel? Are there any other computer science algorithms (not part…
bob smith
  • 51
  • 1
5
votes
3 answers

For an LLM model, how can I estimate its memory requirements based on storage usage?

It is easy to see the amount of disk space consumed by an LLM model (downloaded from Hugging Face, for instance): just go into the relevant directory and check the file sizes. How can I estimate the amount of GPU RAM required to run the model? For…
ahron
  • 265
  • 2
  • 7
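
As a rough rule of thumb (a back-of-the-envelope sketch, not an exact answer): the weight files on disk map almost one-to-one to GPU memory at the same precision, plus overhead for the KV cache, activations and framework buffers. The factors below are illustrative assumptions:

    def estimate_gpu_ram_gb(n_params_billion, bytes_per_param=2, overhead_factor=1.2):
        """Crude serving-memory estimate.
        bytes_per_param: 4 for fp32, 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit.
        overhead_factor: rough multiplier for KV cache, activations and buffers."""
        weights_gb = n_params_billion * 1e9 * bytes_per_param / 1024**3
        return weights_gb * overhead_factor

    # Example: a 7B-parameter model in fp16 is roughly 13 GB of weights,
    # so on the order of 15-16 GB once overhead is included.
    print(round(estimate_gpu_ram_gb(7), 1))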
5
votes
1 answer

Are there any microchips specifically designed to run ANNs?

I'm interested in hardware implementations of ANNs (artificial neural networks). Are there any popular existing technology implementations in the form of microchips which are purpose-designed to run artificial neural networks? For example, a chip which…
kenorb
  • 10,525
  • 6
  • 45
  • 95
3
votes
0 answers

Why does serving GPT models typically involve GEMV instead of GEMM?

According to AI and Memory Wall, serving GPT models "involves repeated matrix-vector multiplications", but I don't understand why. Let's suppose I am the sole user of an LLM server, so we have a batch size of 1. However, the linear layers in each…
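
A sketch of why this happens (illustrative shapes, assuming a standard decoder-only Transformer): during autoregressive decoding each step feeds a single token's hidden-state vector through every linear layer, so with batch size 1 the operation is y = W·x, a matrix-vector product (GEMV); a matrix-matrix product (GEMM) only appears when many tokens (prefill) or many requests are batched together:

    import numpy as np

    d_model, d_ff = 4096, 11008          # hypothetical layer sizes
    W = np.random.randn(d_ff, d_model).astype(np.float32)

    # Prefill / batched requests: many tokens at once -> GEMM.
    X_batch = np.random.randn(128, d_model).astype(np.float32)
    Y_batch = X_batch @ W.T              # shape (128, d_ff)

    # Incremental decoding, batch size 1: one token per step -> GEMV.
    x_token = np.random.randn(d_model).astype(np.float32)
    y_token = W @ x_token                # shape (d_ff,)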
3
votes
2 answers

What is the difference between a normal processor and a processor designed for AI?

What is the difference between a normal processor and a processor designed for AI?
user9947
3
votes
1 answer

What are the aspects that most impact on the inference time for neural networks in embedded systems?

I work with neural networks for real-time image processing on embedded software, and I have tested different architectures (GoogLeNet, MobileNet, ResNet, custom networks...) and different hardware solutions (boards, processors, AI accelerators...). I…
firion
  • 269
  • 1
  • 7
2
votes
2 answers

Are artificial networks based on the perceptron design inherently limiting?

At the time when the basic building blocks of machine learning (the perceptron layer and the convolution kernel) were invented, the model of the neuron in the brain taught at the university level was simplistic. Back when neurons were still just…