Brain-Inspired Computers: An Introduction to Neuromorphic Computing (1/3)
By: Fatma Zehra AYTAŞ and İrem ALTUN【1】

Introduction
For decades, computer technology has advanced at a dizzying pace predicted by Moore's Law. However, this golden age is now hitting its physical limits. The "bottleneck" created by the classic Von Neumann architecture is holding back our potential, as data transfer speeds can't keep up, no matter how fast our processors get. So, is there a way to overcome this bottleneck? This is where the scientific world is turning for inspiration to the most complex and efficient processor known: the human brain. In this article, we'll dive into the limits of traditional computing and introduce the exciting world of neuromorphic computing—a solution that aims to break through these barriers by mimicking the principles of the human brain.
The Von Neumann architecture[1] and Moore's Law[2] have been key milestones on the path to today's computer technology.
The classic Von Neumann architecture consists of a processor, main memory, and a bus that connects these elements. Main memory holds both data and program instructions, which means the processor can fetch only one instruction or one piece of data at a time over the bus. In other words, no matter how fast the processor is, if memory access is slow, the whole system slows down. While processors have improved as technology has progressed, memory access and data transfer speeds have not been able to keep up due to physical limitations[3]. This problem is known as the "Von Neumann bottleneck". In the era of big data, where data is produced instantly and continuously, the limits and shortcomings of this architecture are clearly felt.
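To make the bottleneck concrete, the short sketch below (our own illustration with hypothetical numbers, not taken from the cited sources) uses the well-known roofline idea: attainable throughput is the minimum of the processor's peak rate and what the memory bus can deliver.

```python
# Illustrative sketch of the Von Neumann bottleneck (hypothetical numbers).
# Attainable throughput is capped by the slower of two limits:
# the processor's peak compute rate and the data the bus can feed it.

def attainable_gflops(peak_gflops, bandwidth_gb_s, flops_per_byte):
    """Roofline-style bound: min(compute limit, memory-bandwidth limit)."""
    memory_limit = bandwidth_gb_s * flops_per_byte
    return min(peak_gflops, memory_limit)

# A data-hungry workload (~1 floating-point operation per byte moved) on a
# 25 GB/s bus stays stuck at 25 GFLOP/s no matter how fast the processor gets.
for peak in (100, 200, 400):  # hypothetical peak GFLOP/s
    print(f"peak {peak} GFLOP/s -> attainable {attainable_gflops(peak, 25, 1.0)} GFLOP/s")
```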
On top of the Von Neumann model no longer meeting certain needs, Moore's Law is slowing down and Dennard scaling has nearly come to an end.[4] As a result, adding more transistors to chips has become more costly and more energy-intensive. These problems are a call to change how the computing world approaches computation.
But the goal here is not to find a replacement for the Von Neumann architecture; it is to find systems that can complement it and mitigate some of its weaknesses.
At this point, neuromorphic computing emerges as a promising approach to addressing these weaknesses and efficiency problems.

The first artificial neuron model was developed in 1943 by Warren McCulloch and Walter Pitts[7], and it later formed the basis of artificial neural networks (ANNs). However, this model does not fully capture how the brain works. For this reason, time-sensitive and more energy-efficient architectures called Spiking Neural Networks (SNNs), which are closer to biological reality, were developed later. SNNs transmit information with spike signals and allow synaptic connections to change over time, giving them both learning and adaptation capabilities.[8][9]
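To show what "transmitting information with spike signals" looks like in practice, here is a minimal sketch of a standard leaky integrate-and-fire neuron (our own illustration; the model is generic and the parameter values are arbitrary, not tied to any specific chip or paper):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates the input current, and emits a spike (then resets)
# whenever it crosses a threshold. Parameter values are arbitrary.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + i_in) * (dt / tau)  # leak + integration
        if v >= v_thresh:                         # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset                           # reset after the spike
    return spike_times

# A constant drive above threshold yields a regular spike train; the information
# is carried by *when* spikes occur rather than by continuous activation values.
print(simulate_lif(np.full(200, 1.5)))
```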
Although SNNs were initially run on classical hardware such as CPUs, GPUs, or FPGAs, this hardware proved insufficient in terms of energy efficiency. As a result, new hardware architectures developed specifically for neuromorphic computing were needed. Chips like Intel's Loihi and IBM's TrueNorth are among the first examples of this technology.
Neuromorphic chips are based on analog, digital or hybrid VLSI (Very-Large-Scale Integration) circuits that mimic the brain's neuron and synapse structures. In these systems, the memory and processor are located in the same place; thus, energy and time losses related to data transfer are greatly reduced.[10]
The purpose of this report is to explore neuromorphic computing, which stands out for its natural compatibility with data analysis methods such as artificial neural networks and for its ability to operate more efficiently than traditional systems in areas like big data, sensor data processing, robotics, and cybersecurity. Since these systems mimic the human brain, they also have the potential to produce intelligent behavior.[11]
In this context, the report will cover the background of neuromorphic computing systems, their potential vulnerabilities, and recommendations for improvement, and it will highlight notable neuromorphic chips along with their features.

Background

Fig. 3. A timeline prepared specially for this article, summarizing the historical development of neuromorphic computing and its key milestones, from Alan Turing's early machines to modern neuromorphic chips.
In the late 20th and early 21st centuries, Rodney Douglas and Misha Mahowald helped pioneer the exploration of neuromorphic architectures and their hardware implementations. Over the past decade, numerous companies have been working on neuromorphic computing, including IBM, the developer of the TrueNorth chip. Progress in neuromorphic computing has followed two main approaches. One tries to follow the physical structure of the brain, as in Boahen's neuromorphic circuits and the Neurogrid system at Stanford University. The other sets biological detail aside and tries to tackle cognitive problems such as thinking, attention, planning, and control through algorithms.[12]
In 1948, Turing produced a report for Darwin, the director of the National Physical Laboratory, describing the results of his investigation into how far a machine could imitate complex brain functions. The report was titled "Intelligent Machinery".[13] Psychologist Donald Hebb introduced the theory of synaptic plasticity in 1949: repeated activation of neural circuits can cause long-lasting changes in synapses in different regions of the brain, affecting their subsequent responses. Such synaptic plasticity is regarded as a cellular basis for learning in the central nervous system, linked to developmental and learning processes.[14]
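As a rough illustration of Hebb's rule ("cells that fire together wire together"), the sketch below (our own example with arbitrary numbers, describing the 1949 idea rather than any particular chip's learning rule) strengthens a synaptic weight only when the pre- and postsynaptic neurons are active at the same time:

```python
# Hebbian plasticity sketch: the weight grows only on coincident pre/post activity.
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01   # learning rate (arbitrary)
w = 0.1      # initial synaptic weight

for _ in range(1000):
    pre = rng.integers(0, 2)                        # presynaptic spike? (0 or 1)
    post = pre if rng.random() < 0.8 else 1 - pre   # postsynaptic cell usually follows
    w += eta * pre * post                           # strengthen only when both fire

print(f"synaptic weight after repeated co-activation: {w:.2f}")
```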
It was shown by Perrett, Rolls, and Caan in 1982 and by Thorpe and Imbert in 1989 that visual pattern analysis and classification can be carried out by humans in just 100 ms, even though this involves at least 10 synaptic stages from the retina to the temporal lobe.[15]
Between 1989 and 1996, experimental evidence accumulated indicating that many biological neural systems use the timing of single action potentials, or spikes, to encode information (see Softky, 1994; Arbib, 1995; Singer, 1995; Lestienne, 1996).[16][17][18] In the same period, researchers also began to carry out successful experiments with related new types of electronic hardware, such as pulse-stream VLSI technology (see, e.g., Mead, 1989; Pratt, 1989; Horiuchi, Lazzaro, Moore, & Koch, 1991; DeYong, Findley, & Fields, 1992; Murray & Tarassenko, 1994; Douglas, Koch, Mahowald, Martin, & Suarez, 1995; Jiu & Leong, 1996; Northmore & Elias, 1996). The mathematical model of spiking neurons, also called integrate-and-fire neurons, can be traced back to Lapicque's work of 1907.[19] These models were reviewed comparatively in a 1995 study by Gerstner.
IBM's TrueNorth chip, built in 2014, has 1 million spiking neurons and 256 million synapses. It was developed to run spiking neural networks efficiently, not to perform general-purpose tasks. Power consumption depends on how often spikes occur, how many synapses are active, and how far the spikes travel. The chip supports programming of individual neurons and synapses, which helps in applications such as object recognition. Its performance is measured in SOPS (synaptic operations per second), not FLOPS. By connecting multiple chips, researchers have built neurosynaptic systems with millions of neurons. When combined with von Neumann processors, these hybrid setups can handle different types of problems more efficiently. The chip is built with CMOS technology.[20]

Fig. 4. A photograph of Intel's "Nahuku" research board, which houses the Loihi neuromorphic chip. Source: Intel (via Wikichip), License: CC BY-SA 4.0.
Intel's Loihi chip, introduced in 2018, is capable of on-chip learning. It is a 60 mm² chip manufactured in Intel's 14 nm process that advances the state of the art in modeling spiking neural networks in silicon. Loihi brings a wide range of new features to the field, such as hierarchical connectivity, dendritic compartments, synaptic delays and, most significantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm (LCA), Loihi solves LASSO optimization problems with an energy-delay product more than three orders of magnitude better than conventional solvers. This provides a clear example of spike-based computation outperforming all known conventional solutions.[21]
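For reference, the LASSO problem mentioned above has the standard form below, and the Locally Competitive Algorithm solves it with neuron-like units coupled through lateral inhibition. The equations are written in our own notation as a summary of the published algorithm, not as a description of Loihi's internal implementation.

```latex
% LASSO: find a sparse code a for an input s under a dictionary \Phi
\min_{a}\ \tfrac{1}{2}\,\lVert s - \Phi a \rVert_2^2 + \lambda \lVert a \rVert_1

% LCA dynamics: each unit integrates its input drive b_i, competes with the
% other units through lateral inhibition G, and is passed through a threshold.
\dot{u}_i = \frac{1}{\tau}\Bigl(b_i - u_i - \sum_{j \neq i} G_{ij}\, a_j\Bigr),
\qquad b = \Phi^{\top} s,\quad G = \Phi^{\top}\Phi - I,\quad a_i = T_{\lambda}(u_i)
```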

As we've seen in this article, the physical and efficiency problems faced by traditional computer architectures are forcing us to turn a new page in computing. Drawing its inspiration directly from the trillions of neural connections in our brain, neuromorphic computing stands out as the strongest candidate for this new chapter, offering the potential to do smarter work with less energy.
But are these next-generation technologies that try to mimic the brain truly flawless? What risks and vulnerabilities are hidden behind this promising architecture?
In the second part of our series, we will put the potential vulnerabilities of neuromorphic systems and the challenges faced by pioneering chips developed by giants like Intel and IBM under the microscope. See you in the next post!
【1】 I would like to thank my colleague, İrem Altun (https://preprod.kureansiklopedi.com/tr/profil/aaltunirem), for her valuable contributions and co-authorship on this article.