TBA BT NOTES - I liked that this book focused more on the societal benefits of AI use cases, and that it explained in detail how the fuck it actually works with data sets
These tools aren't replacing human researchers, but instead augmenting their capabilities. This human-AI synergy promises a future where we can unravel complex problems and push the boundaries of human knowledge faster than ever before
Impact/use cases:
- In a study of some 400 college-educated professionals, those who used ChatGPT to assist with writing tasks completed their assignments in half the time. Interestingly, less experienced writers saw improvements in quality, while skilled writers maintained their high standards but finished more quickly.
- In healthcare, AI is tackling one of the industry's most pressing bottlenecks, administrative overload. By automating tasks like medical coding, AI tools are freeing up valuable time for patient care.
- In one experiment, researchers at the University of Toronto used a group of AI systems, including AlphaFold (which predicts the structure of proteins), in concert to identify possible compounds for cancer treatment. With this system, they identified a promising candidate compound in just 30 days, a process that typically takes years.
- Stanford University sleep scientist Emmanuel Mignot has shown that AI models can interpret complex sleep data, known as polysomnography, as adeptly as human experts. The models have also been used to uncover unexpected connections between sleep patterns and various diseases, finding, for instance, specific sleep behaviors that correlate with Parkinson's disease.
A better way to explain how AI works:
- In a neural network, the connections between artificial neurons, sometimes known as edges in machine learning, are like the synapses that wire our biological neurons together. Each has what's called a weight, a single number that represents the strength of the connection. And neural networks are built in layers. To visualize this, imagine a giant administrative building with multiple stories, where each floor processes information differently.
- The ground floor, also known as the input layer, receives raw data. As information ascends through middle layers, it undergoes transformations, with each story extracting increasingly abstract features. Finally, the top floor, the output layer, produces the network's prediction or decision.
- How do you teach a machine to recognize letters? To turn a mess of pixels into clean digital text? Say each picture is a grayscale image that's 20 pixels by 20 pixels in size. That's 400 pixels per image. Each pixel is represented by a number from 1 to 100, representing brightness from black to gray to white. So what we have, then, are arrays of 400 numbers, ranging from 1 to 100, each labeled with the letter it shows. That's our data set (see the first code sketch after this list).
- Imagine a series of vertical columns, each populated with circular nodes representing neurons. The leftmost column, called the input layer, has 400 neurons, one per pixel. And the rightmost column, the output layer, has 26 neurons, one for each letter of the alphabet. In between are some other columns of neurons, which we call middle layers.
- Each neuron in one layer connects to every neuron in the adjacent layer to its right. These connections, represented visually as lines between neurons, are the pathways along which information flows through the network. And as you recall, each connection has a weight, a number that determines the strength or influence of the signal it carries. (and each neuron has an associated bias term)
- The bias, just another numerical value, acts as a threshold dictating how easily the neuron activates or passes along information. Basically, how willing the neuron is to fire. And that's it. That's a neural network.
- Training a neural network means gradually tuning the weights and biases. They start out with totally random values, which are gradually tweaked and tuned with every round of "fetch," so to speak. It starts with what's called a forward pass.
- The data thus flows forward through the network, all the way through the middle layers to the output layer, which yields a prediction: which letter is most likely contained in the image (the second sketch after this list traces this in code).
- The prediction is then compared with the correct label, and the gap between them is the error. Through a process called backpropagation, the network traces its steps backward, identifying just which connections contributed most to the error. Backpropagation is the unsung hero of deep learning. It allows the network to adjust its parameters, the connection weights and neuron biases, to reduce errors. This process is repeated countless times (see the training sketch after this list).
- With each iteration, the network inches closer to accuracy, learning from its mistakes like our puppy does. As it learns, patterns emerge. The different layers in the network help by breaking complexity down into manageable pieces. For instance, early middle layers detect the simplest possible shapes, like edges of light and dark, while deeper layers combine these features to identify larger and more complex shapes, like the lines and loops that combine to form letters.
- The true test comes when we present the system with images it hasn't seen before. If it's been trained well, it should be able to generalize from its training data and accurately classify these new examples. This ability to generalize is what makes neural networks so powerful. It allows them to learn and manipulate patterns with increasing levels of abstraction and sophistication in practically any domain, from writing to photorealistic images to simulated human voices.
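To make the walkthrough above concrete, here's a minimal sketch (mine, not the book's) of what that letter data set looks like as arrays. It assumes NumPy, and the images and labels are random stand-ins for real scanned letters; the dimensions are the ones described above (20x20 pixels, brightness values 1 to 100, 26 letters).

```python
# Sketch of the data set: 20x20 grayscale letter images, flattened into
# arrays of 400 brightness values (1 = black ... 100 = white), each paired
# with a label for the letter it shows.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXAMPLES = 1000                    # illustrative size; a real data set would be larger
IMAGE_SIDE = 20                        # 20 x 20 pixels
NUM_PIXELS = IMAGE_SIDE * IMAGE_SIDE   # 400 numbers per image
NUM_LETTERS = 26                       # A..Z

# Stand-in data: in practice these arrays would come from real scanned letters.
images = rng.integers(1, 101, size=(NUM_EXAMPLES, NUM_PIXELS))  # values 1..100
labels = rng.integers(0, NUM_LETTERS, size=NUM_EXAMPLES)        # 0 = 'A', ..., 25 = 'Z'

print(images.shape)   # (1000, 400): one row of 400 brightness values per image
print(labels[:5])     # the letters the first five images are labeled with
```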
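And here's a toy version of the network itself with one forward pass, again my own sketch rather than anything from the book: 400 input neurons, one middle layer (its size, 64, is an arbitrary choice), 26 output neurons, a weight on every connection, a bias on every neuron, and a softmax at the end to turn scores into "which letter is most likely."

```python
# Sketch of the network: input layer (400), one middle layer, output layer (26),
# with a weight on every connection and a bias on every neuron.
import numpy as np

rng = np.random.default_rng(0)

INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE = 400, 64, 26   # hidden size is an assumption

# Weights start out as small random values; biases start at zero.
W1 = rng.normal(0, 0.01, size=(INPUT_SIZE, HIDDEN_SIZE))   # input -> middle connections
b1 = np.zeros(HIDDEN_SIZE)                                  # one bias per middle neuron
W2 = rng.normal(0, 0.01, size=(HIDDEN_SIZE, OUTPUT_SIZE))   # middle -> output connections
b2 = np.zeros(OUTPUT_SIZE)                                   # one bias per output neuron

def forward(pixels):
    """One forward pass: raw pixel values in, 26 letter probabilities out."""
    h = np.tanh(pixels @ W1 + b1)          # middle layer: weighted sums, then squashed
    scores = h @ W2 + b2                   # output layer: one score per letter
    exp = np.exp(scores - scores.max())    # softmax turns scores into probabilities
    return exp / exp.sum()

# A single made-up image: 400 brightness values between 1 and 100,
# scaled down so the arithmetic stays numerically tame.
pixels = rng.integers(1, 101, size=INPUT_SIZE) / 100.0
probs = forward(pixels)
print("Most likely letter:", "ABCDEFGHIJKLMNOPQRSTUVWXYZ"[probs.argmax()])
```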
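Finally, a sketch of the training loop described above: forward pass, compare the prediction to the correct label, backpropagate to work out each weight's and bias's share of the error, nudge them, repeat, then test on held-out images the network has never seen. The data here is random noise, so the held-out accuracy will hover around chance (about 1 in 26); the learning rate and epoch count are arbitrary choices just to show the mechanics.

```python
# Sketch of training by backpropagation, plus a generalization check on
# images the network never saw during training.
import numpy as np

rng = np.random.default_rng(0)
N, INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE = 1000, 400, 64, 26

X = rng.integers(1, 101, size=(N, INPUT_SIZE)) / 100.0   # stand-in letter images
y = rng.integers(0, OUTPUT_SIZE, size=N)                  # stand-in labels

# Hold some images out so we can check generalization afterwards.
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

W1 = rng.normal(0, 0.01, size=(INPUT_SIZE, HIDDEN_SIZE)); b1 = np.zeros(HIDDEN_SIZE)
W2 = rng.normal(0, 0.01, size=(HIDDEN_SIZE, OUTPUT_SIZE)); b2 = np.zeros(OUTPUT_SIZE)
lr = 0.1   # learning rate: how big each nudge is

for epoch in range(20):
    # Forward pass over the whole training set at once.
    h = np.tanh(X_train @ W1 + b1)
    scores = h @ W2 + b2
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)

    # Error: how far the predicted probabilities are from the correct letters.
    onehot = np.eye(OUTPUT_SIZE)[y_train]
    loss = -np.mean(np.sum(onehot * np.log(probs + 1e-12), axis=1))
    if epoch % 5 == 0:
        print(f"epoch {epoch}: loss {loss:.3f}")

    # Backpropagation: trace the error backward to find each parameter's nudge.
    d_scores = (probs - onehot) / len(X_train)
    dW2 = h.T @ d_scores
    db2 = d_scores.sum(axis=0)
    d_h = (d_scores @ W2.T) * (1 - h**2)   # tanh derivative
    dW1 = X_train.T @ d_h
    db1 = d_h.sum(axis=0)

    # Nudge every weight and bias a little in the direction that reduces error.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The real test: images the network has never seen before.
h = np.tanh(X_test @ W1 + b1)
preds = (h @ W2 + b2).argmax(axis=1)
print("held-out accuracy:", (preds == y_test).mean())
```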
ON EMPATHY AND COMMUNICATION:
- There's another domain that AI systems are beginning to model surprisingly well: HUMAN EMPATHY
- In a customer-service setting, researchers discovered that giving agents an AI-based conversational assistant increased their productivity.
- When customers did speak to human agents after engaging with the AI, they were markedly less confrontational. The rate of customers demanding to speak to a manager dropped. It turns out the AI was acting as a buffer, absorbing the caller's initial frustrations and paving the way for more constructive human-to-human dialogue
- Remarkably, patients consistently rated the AI-generated responses as more empathetic than the ones written by human clinicians.
- Picture a system that can detect the slight tremor in your voice when you're nervous, or the barely perceptible furrow of your brow when you're confused.
- One such system, developed at MIT, can detect signs of depression by analyzing speech patterns and facial expressions. In a study of 142 patients, the AI system's depression assessments aligned closely with those of trained clinicians.
- Safeguarding privacy and preventing misuse will be paramount as these technologies mature. But handled correctly, the hope is that AI will help deepen our understanding of each other as humans, and perhaps even of animals, by revealing subtleties beyond human perception.