If you’ve heard about “machine learning” lately, it may very well have been because of all the concerns facial recognition software is raising. Facebook is automatically tagging people who show up in photos. Chinese authorities are tracking citizens’ movements. Police departments in the U.S. are using its output as evidence, sometimes erroneously.
There are good reasons to be concerned, and those are laid out in stories like the Aug. 18 survey “Made You Look: How Folks Are Fooling Facial Recognition” from NPR, and the Aug. 22 compilation of the “10 reasons you should be worried about facial recognition technology” at techxplore.com.
This diary could totally have been about that, but there’s also a kinder, gentler side to the technology. A life-saving side, in fact, and that aspect of it is just getting warmed up. I’m going to talk about two very recent and terrific advances on that front.
But before I do, let’s see if I can convey the gist of machine learning in two paragraphs:
When I used to work in Cambridge, Mass., I would walk across the bridge every morning from my apartment in Boston. And man, every time the wind was blowing from the East, not only would I be acutely aware of that out on the chilly bridge, but it almost always meant it was going to rain that day. I could make that call about 80% of the time.
But if I were a computer with a flexible machine-learning model that had been trained with years’ worth of data, I might have known that it rains 98% of the time when the wind blows from the East, the pressure drop precedes the rise in humidity, and either there’s a sale at Jordan’s Furniture or the Red Sox or Bruins lose by more than two. If such a complex pattern really existed, machine learning could pick it up, but a human being would have a very hard time noticing it. With machine learning, I can team up with my computer to make very good predictions if I collect lots and lots of good data and I allow my model the flexibility to take many different things into account.
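Just to make that concrete, here’s a toy sketch of the idea: a tiny from-scratch logistic regression that learns to predict rain from a handful of yes/no features. Every feature, number, and function name below is invented purely for illustration — this is the flavor of machine learning, not any real weather model.

```python
import math

# Invented toy data: [wind_from_east, pressure_drop, humidity_rise] -> rained?
# The pattern hidden here: rain when at least two signals are present.
data = [
    ([1, 1, 1], 1), ([1, 1, 0], 1), ([1, 0, 1], 1), ([1, 0, 0], 0),
    ([0, 1, 1], 1), ([0, 1, 0], 0), ([0, 0, 1], 0), ([0, 0, 0], 0),
]

def predict(weights, bias, x):
    """Logistic regression: squash a weighted sum into a 0..1 rain probability."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Nudge the weights a little after each example (stochastic gradient descent)."""
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(weights, bias, x) - y   # gradient of the log-loss
            bias -= lr * err
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights, bias

weights, bias = train(data)
# East wind plus a pressure drop, no humidity rise yet: the model calls rain likely.
print(predict(weights, bias, [1, 1, 0]))
```

The point is that the computer never gets told the rule; it just sees examples and finds weights that fit, and the same machinery scales to patterns far too tangled for a person on a bridge to notice.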
And with that, on to a couple of very recent standout advances in the field that are going to lead to lives being saved.
Advance #1: Mobile and inexpensive disease diagnosis with “Octopi”
Manu Prakash’s lab at Stanford just stepped up and helped to fill a serious need for poor people around the world. They created a system called Octopi that can diagnose malaria and other viral and parasitic diseases, in the field, for less than $500 a unit. The official manuscript is here, but a pictorial explanation directly from the lab that’s a lot more real is here:
Let me first tell you a story of why we built Octopi. Imagine you are [a] lab tech in a remote hospital with hundreds of patients waiting all day outside the clinic every day. They all want to know - are they sick from a deadly disease like malaria? You work 10 hrs/day non-stop.
...barely eating, trying to get through samples. Of course you don't have electricity. Not enough hours in the day. And the patients keep coming. This is the life of incredible staff and lab technicians like Praveen and Durga I met in Kalahandi, Orissa.
There are still 200 million cases per year of malaria, and more than 400,000 deaths, with billions more people at risk. It’s freaking terrifying. One of the main reasons for this is that many regions simply lack the tools, as Prakash says and as the WHO has pointed out.
The Octopi tool can screen from 1.5 million to 5 million blood cells per minute, from a finger pinprick, and it’s been trained with machine learning to spot the malaria parasite, Plasmodium falciparum, at nearly 100% efficiency. It’s modular, too, so other diagnostic tests can be plugged in as they are developed. It charges from an ordinary phone charger, runs one of those 10-hour days on a single charge, and weighs only a few pounds.
The low-cost, low-magnification microscope they use is turned into an advantage: low magnification means you get more cells in the field of view, and the machine learning algorithm can handle that volume, whereas a human being would get worn down by squinting at thousands of tiny cells all day. Early and fast diagnosis is the key. If you get treated within 24 hours of symptoms arising, your chances of survival are much better.
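To see why volume is the whole game, here’s a hypothetical back-of-the-envelope sketch. None of these numbers, feature names, or thresholds come from the Octopi paper — they’re invented to show how a wide low-magnification field plus a tireless classifier adds up to millions of cells per minute:

```python
import random

# Hypothetical numbers, invented for illustration only:
cells_per_field = 5000    # low magnification -> many cells in one image
fields_per_minute = 600   # automated stage + camera churning through slides

def classify(cell_brightness, threshold=0.8):
    """Stand-in for a trained model: flag a cell as suspicious when a single
    made-up feature crosses a threshold. A real model uses many features."""
    return cell_brightness > threshold

# Simulate one field of cells: most look ordinary, a fraction look "bright."
random.seed(42)
field = [random.random() for _ in range(cells_per_field)]
flagged = sum(classify(c) for c in field)

print(f"{cells_per_field * fields_per_minute:,} cells screened per minute")
print(f"{flagged} cells flagged for review in this field")
```

A human microscopist slows down and makes mistakes as the day wears on; the threshold function above gives the same answer for cell five thousand as it did for cell one.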
I know that sports and movie celebrities are fun, but these are your real stars right here:
The paper’s authors are Hongquan Li, Hazel Soto-Montoya, Maxime Voisin, Lucas Fuentes Valenzuela, and Manu Prakash. Fist bump, Stanford!
Advance #2: Sorting cancer cells out of blood
Yes, you read that right. A new superfast machine learning tool can actually separate cancer cells right out of blood. That means you can not only detect metastasis and leukemia early, but you can actually help out the immune system by getting rid of many of the culprits.
Machine learning means you don’t need to expose blood cells to toxic dyes or anything else that might hurt them in order to spot the cancerous ones. In fact, you and I can spot cancer cells with our own eyes:
The problem is, you and I can’t visually distinguish thousands of cancer cells in a liter of blood and separate them out by hand.
But the Bahram Jalali lab at UCLA has an answer, as they describe in their July 31 paper in Nature. They’ve devised a way to send blood through a very thin tube at 4 feet per second, so that one cell at a time can be evaluated, and collect a TRILLION data points per second. That is insane data acquisition!
The machine learning algorithm spots the cancer cells within the sample at better than 95% efficiency, and by the time an individual cell reaches the end of the tube, the call has been made, and the cell gets directed one way if it’s cancerous and the other way if it’s not. The cancerous cells get sorted out from the normal ones, at about 300 cells per second. The rest of the blood, with healthy cells, gets returned to the patient.
Not only did they find a way to collect data at such an astonishing speed, but they streamlined the processing of that data by using only 1 out of every 40 laser reflections off each cell and not bothering to construct a human-friendly image. They made correlations only with the crude waveforms that bounced back, because that’s all they needed to make an accurate call. Hey, hey, no pictures!
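The shape of that pipeline might be sketched like this. The 1-in-40 downsampling ratio is from the work described above, but the classifier, the feature, the thresholds, and the simulated cells are all invented stand-ins — a real system learns its decision rule from training data rather than using a hand-picked cutoff:

```python
import random

def classify_waveform(samples):
    """Stand-in classifier: call a cell cancerous from crude waveform
    statistics alone -- no image is ever reconstructed. The feature
    (mean amplitude) and the 0.6 threshold are invented for illustration."""
    mean_amp = sum(samples) / len(samples)
    return mean_amp > 0.6

def sort_cell(raw_waveform):
    # Keep only 1 out of every 40 reflections -- enough for an accurate call.
    kept = raw_waveform[::40]
    return "cancer bin" if classify_waveform(kept) else "return to patient"

random.seed(1)
# Simulated cells: healthy waveforms hover near 0.4, cancerous near 0.8.
healthy = [random.gauss(0.4, 0.05) for _ in range(4000)]
cancerous = [random.gauss(0.8, 0.05) for _ in range(4000)]
print(sort_cell(healthy))     # -> return to patient
print(sort_cell(cancerous))   # -> cancer bin
```

The design point to notice: skipping the image entirely is what makes the call fast enough to finish before the cell reaches the fork in the tube.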
Our definitely-ready-for-prime-time researchers in this example are Yueqin Li, Ata Mahjoubfar, Claire Lifan Chen, Kayvan Reza Niazi, Li Pei, and Bahram Jalali.
Prof. Jalali isn’t exactly new at this, by the way. His lab has also devised a method to sort out algal cells with rare combinations of genetic alterations that cause them to accumulate lots of solar biofuel. So much good stuff coming out of that lab. That’s UCLA getting it DONE!
Before I go, just for fun, I wanted to share some art that was made by a machine learning system at The Art and Artificial Intelligence Laboratory at Rutgers. (Go to their site and poke around!) They “taught” different art genres to a computer using machine learning and then had it generate its own art:
P.S. Someone asked me (and rightfully so) in the comments of my last diary to put mouse-over descriptions on my pictures, and I said I would, but because I am a lummox, I only figured out how to do it for the last couple of pictures here. (Is this thing on?! What's the glowy button for?) Next time they’ll all have descriptions, really…