Our brain is an incredible pattern-recognizing machine. It processes ‘inputs’ from the outside world, categorizes them (that’s a dog; that’s a slice of pizza; ooh, that’s a bus coming towards me!), and then generates an ‘output’ (petting the dog; the yummy taste of that pizza; getting out of the way!). All of this happens with little conscious effort, almost impulsively. It’s the very same system that senses if someone is mad at us, or involuntarily notices the stop signal as we speed past it. Psychologists call this mode of thinking ‘System 1’, and it includes innate skills, like perception and fear, that we share with other animals. (There’s also a ‘System 2’; to learn more about it, check out the extremely informative Thinking, Fast and Slow by Daniel Kahneman.)

How is all of this related to Neural Networks, you ask? Wait, we’ll get there in a second. Look at the image above: just your regular numbers, distorted to help explain the learning of Neural Networks better. Even at a cursory glance, your mind will prompt you with the words “192”. You surely didn’t go, “Ah, that seems like a straight line, I think it’s a 1.” You didn’t compute it; it happened instantly. Fascinating, right? It’s not rocket science: you’ve come across these digits so many times in your life that, by trial and error, your brain automatically recognizes them even when presented with something only remotely close. Let’s cut to the chase.

What exactly is a Neural Network? How does it work?

As a child, if we got hurt on touching a hot mug, we made sure never to touch a hot mug again. But did we have any such concept of “hurt” BEFORE we touched it? Not really. This tweaking of our knowledge and understanding of the world is based on recognizing day-to-day patterns. Quite like us, computers, too, learn through a similar kind of pattern recognition.
This is what forms the basis of the working of neural networks.

Standard computer programs and algorithms work on simple logic trees: if A happens, then B happens. We can pre-program all the potential outcomes. This, however, eliminates any scope for flexibility. There’s no learning there. And that’s where Neural Networks come into the picture! A neural network is built without any specific logic. Essentially, it is a system that is trained to look for, and adapt to, patterns within data. It is modeled loosely on how our own brain works. Each neuron (idea) is connected via synapses. Each synapse has a value that represents the likelihood of the connection between two neurons occurring. Take a look at the image below.

What exactly are neurons, you ask? Simply put, a neuron is just a singular concept. A mug, the colour white, tea, the burning sensation on touching a hot mug: all of these are possible neurons. All of them can be connected, and the strength of their connection is decided by the value of their synapse; the higher the value, the stronger the connection. Let’s look at one basic neural network connection to understand better: each neuron is a node, and the lines connecting them are the synapses. A synapse’s value represents the likelihood that one neuron will be found alongside the other. So, it’s pretty clear that the diagram in the image above describes a mug containing coffee, which is white in colour, and is extremely hot.

Not all mugs have the properties of the one in question. We can connect many other neurons to the mug. Tea, for example, is likely more common than coffee. The likelihood of two neurons being connected is determined by the strength of the synapse connecting them: the more hot mugs we encounter, the stronger the synapse. However, in a world where mugs are not used to carry hot beverages, the number of hot mugs would decrease drastically.
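The idea of a synapse value tracking how often two concepts co-occur can be sketched as a toy in code. To be clear, this is an illustration of the counting intuition above, not a real neural network library; the class name `SynapseMap` and its methods are made up for this example.

```python
# Toy sketch: neurons are just labels, and each synapse value counts
# how often two concepts have been observed together.
from collections import defaultdict

class SynapseMap:
    def __init__(self):
        # Synapse strength between two neurons, keyed by an unordered pair.
        self.strength = defaultdict(float)

    def observe(self, neuron_a, neuron_b):
        """Strengthen the connection each time two concepts co-occur."""
        self.strength[frozenset((neuron_a, neuron_b))] += 1.0

    def connection(self, neuron_a, neuron_b):
        return self.strength[frozenset((neuron_a, neuron_b))]

brain = SynapseMap()
# We keep encountering hot mugs of coffee...
for _ in range(5):
    brain.observe("mug", "hot")
# ...and, just once, a mug of tea.
brain.observe("mug", "tea")

# "mug" is now far more strongly linked to "hot" than to "tea";
# if hot mugs became rare, the gap would shrink accordingly.
print(brain.connection("mug", "hot"))   # 5.0
print(brain.connection("mug", "tea"))   # 1.0
```

Fewer observations mean a weaker synapse, which is exactly the effect described next.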
Incidentally, this decrease would also lower the strength of the synapses connecting mugs to heat. So the strong mug–hot connection in the earlier diagram becomes a weak one.

This small and seemingly unimportant description of a mug represents the core construction of neural networks. We touch a mug kept on a table and find that it’s hot. It makes us think all mugs are hot. Then we touch another mug, this time one kept on the shelf, and it’s not hot at all. We conclude that mugs on the shelf aren’t hot. As we grow, we evolve; our brain has been taking in data all this time. That data lets it determine an accurate probability as to whether or not the mug we’re about to touch will be hot. Neural networks learn in the exact same way. Now, let’s talk a bit about the first and most basic model of a neural network: the Perceptron!

What is a Perceptron?

A perceptron is the most basic model of a neural network. It takes multiple binary inputs x1, x2, …, and produces a single binary output.

Let’s understand the above with an analogy. Say you walk to work. Your decision to go to work is based mainly on two factors: the weather, and whether it is a weekday or not. The weather factor is still manageable, but working on weekends is a big no! Since we have to work with binary inputs, let’s pose the conditions as yes-or-no questions. Is the weather fine? 1 for yes, 0 for no. Is it a weekday? 1 for yes, 0 for no.

Remember, we cannot explicitly tell the neural network these conditions; it’ll have to learn them for itself. How will it decide the priority of these factors while making a decision? By using something known as “weights”. Weights are just a numerical representation of the preferences. A higher weight will make the neural network treat that input with a higher priority than the others.
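The walk-to-work decision can be sketched as a minimal perceptron. The weights and threshold below are illustrative guesses chosen so that the weekday input dominates, as the analogy suggests; they are not learned values from the article.

```python
# A minimal perceptron: fire (output 1) if the weighted sum of the
# binary inputs exceeds a threshold, otherwise output 0.
def perceptron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# x1: is the weather fine?  x2: is it a weekday?
weights = [2.0, 4.0]   # working on weekends is a big no, so x2 weighs more
threshold = 3.0

print(perceptron([1, 1], weights, threshold))  # nice weekday    -> 1 (go to work)
print(perceptron([0, 1], weights, threshold))  # rainy weekday   -> 1 (still go)
print(perceptron([1, 0], weights, threshold))  # sunny weekend   -> 0 (stay home)
```

With these numbers, the weekday input alone clears the threshold while the weather input alone does not, matching the intuition that weekends are decisive.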
This is represented by the w1, w2, … in the flowchart above.

“Okay, this is all pretty fascinating, but where do Neural Networks find work in a practical scenario?”

Real-life applications of Neural Networks

If you haven’t yet figured it out, let us tell you: a neural network can do pretty much anything, as long as you’re able to get enough data and an efficient machine to compute the right parameters. Anything that even remotely requires machine learning turns to neural networks for help. Deep learning is another domain that makes extensive use of neural networks. It is one of many machine learning approaches that enable a computer to perform a plethora of tasks such as classification, clustering, or prediction. Neural networks help us find solutions to problems for which a traditional algorithmic method is expensive or does not exist. They can learn by example, so we do not need to program them explicitly to any great extent, and they can be both accurate and significantly faster than conventional methods at these tasks.

Because of the reasons mentioned above and more, Deep Learning, by making use of Neural Networks, finds extensive use in the following areas: speech recognition, handwriting recognition, face recognition, artificial intelligence in games, robotics and its subfields, and many more.

In Conclusion…

Neural networks form the backbone of almost every big thing you see today. It’s only fair to say that imagining deep/machine learning without neural networks is next to impossible. Depending on the way you implement a network, and the kind of learning you put to use, you can get a lot more out of a neural network than out of a traditional computer system.