Some of the most exciting (and scary) aspects of machine learning that you may not know about

The decibel level of chatter around artificial intelligence is rising to the point where many are inclined to dismiss it as hype. That's unfair because, while certain aspects of the technology, like self-driving cars, are a long way from becoming mainstream, it's a fascinating topic. After listening to a recent talk by Dr. Eric Horvitz, Microsoft Research managing director, I can appreciate that the number of applications being conceived around the technology is matched only by the ethical dilemmas surrounding it. In both cases, they are much more varied than what typically dominates the conversation about AI.

For fans of the ethical roads less traveled in AI, Horvitz offered his audience at the SXSW conference last week a fair few items to consider, alternating between hope for the human condition and fear for it. Although I previously highlighted some of the healthcare applications he discussed, he raised plenty of other issues that could one day be just as relevant to healthcare. I have included a few of them here.

Interpreting facial expressions


The idea of machine learning being applied to make people more connected to each other and to improve our communication skills in subtle ways is fascinating to me. One example Horvitz used was a blind man conducting a meeting and receiving auditory cues about the facial expressions of his audience. The idea is to give him more insight into the people around him so he has a better sense of how the points he raises are perceived, beyond what the people in the meeting actually say. In a practical way, it gives him an additional layer of knowledge he wouldn't have otherwise and makes him feel more connected to others.

The ethical decisions of self-driving cars

As exciting as the prospect of self-driving cars is, Horvitz called attention to some of the important, still-unresolved questions of how they would perform in an accident or when trying to avoid one. What decisions would the computer make when, say, a collision with a pedestrian is likely and the car has to make a split-second choice? Does it preserve the life of the driver or the pedestrian, if it comes to that? What responsibility does the manufacturer have? What values will be embedded in the system? How should manufacturers disclose this information?


A slide that was part of Dr. Eric Horvitz’s talk at SXSW this year.

Adversarial machine learning

One fascinating topic addressed in the talk was how machine learning could be used with negative intent — a practice referred to as adversarial machine learning. It involves feeding a computer information that changes how it interprets images and words and how it processes information. In one study, a computer that had been fed images of a stop sign could be retrained to interpret those images as a yield sign. That has important implications for self-driving cars and automated tasks in other sectors.
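To make the idea concrete, here is a minimal, hypothetical sketch of the "fast gradient sign" technique that underlies many evasion-style attacks, using a toy logistic-regression classifier in NumPy rather than a real vision model. The weights, the input, and the "stop sign" framing are all stand-ins for illustration, not the study Horvitz cited:

```python
# Toy illustration of an evasion-style adversarial example (the FGSM idea).
# Everything here is hypothetical: a random logistic-regression "sign
# classifier" stands in for a real image model.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=16)  # fixed random model weights
b = 0.1

def p_stop(x):
    """Toy model: P("stop sign") = sigmoid(w.x + b)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(size=16)
if p_stop(x) < 0.5:
    x = -x  # make sure the clean input starts out classified as "stop"

# For logistic loss with true label "stop", the gradient of the loss with
# respect to x points along -w, so nudging each feature by -sign(w) raises
# the loss the most per unit of perturbation.
logit = x @ w + b
eps = 1.1 * logit / np.abs(w).sum()  # just enough to cross the boundary
x_adv = x - eps * np.sign(w)         # tiny per-feature nudge, worst-case direction

print(f"clean input:     P(stop) = {p_stop(x):.3f}")
print(f"perturbed input: P(stop) = {p_stop(x_adv):.3f}")  # drops below 0.5
print(f"largest per-feature change: {np.abs(x_adv - x).max():.3f}")
```

The unsettling part is that the perturbation is tiny per feature, yet because it is chosen in the worst-case direction, it flips the prediction of a model that otherwise performs well.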

Another facet of adversarial machine learning is taking information about individuals' Web searches, the likes and dislikes they share on social networks, and the kinds of content they tend to click on, and using it to manipulate those people. That could cover a wide swathe of misdeeds, from manipulation through fake tweets designed by neural networks to mimic the personality of the account holder to particularly nasty phishing attacks. Horvitz noted that these AI attacks on human minds will be an important issue in our lifetime.

“We’re talking about technologies that will touch us in much more intimate ways because they are the technologies of intellect,” Horvitz said.

Applying AI to judicial sentencing software

Although machine learning for clinical decision support tools is an area of interest in healthcare, helping to identify patients at risk of readmission or to analyze medical images for patterns and anomalies, it's also entering the realm of judicial sentencing. The concern is that the software tools some states permit judges to use in determining sentences carry the biases of their human creators and further erode confidence in the legal system. ProPublica drew attention to the issue last year.

Wrestling with ethical issues and challenges of AI

Horvitz likened the current stage of AI development to the Wright Brothers' first airplane flight at Kitty Hawk, North Carolina, which rose 20 feet off the ground and lasted all of 12 seconds. But the risk and challenge of many technologies is that at a certain point they can progress far faster than anyone can anticipate. This is why there has been a push, through efforts such as the Partnership on AI, to wrestle with the ethical issues of AI proactively rather than address them after the fact. Eight years ago, Stanford University set up AI100, an initiative to study AI for the next 100 years. The idea is that the group will study and anticipate how artificial intelligence will affect every aspect of how people work and live.

Photo: Andrzej Wojcicki, Getty Images

5 startups from the SXSW Accelerator that you should meet

Sound Scouts, an Australia-based business that developed a DIY hearing test app that parents can download and run for their children, emerged as the winner of the SXSW Accelerator pitch competition in the digital health and wearables track, according to an emailed announcement from the organizers. The test is cleverly disguised as a game, designed to create a more interactive experience for kids while alerting parents to any hearing problems that warrant attention from healthcare professionals.

It wasn't immediately clear what Sound Scouts' plans for the U.S. market are, but founder Carolyn Mee said during her initial presentation that she wants to make the product available to adults and children around the world.

Although there was only one dedicated health track, the technology behind a few of the other startup winners has direct or indirect applications for healthcare as well.

Enterprise and Smart Data

Deep 6 AI developed technology to make it easier to match patients with appropriate clinical trials through natural language processing and artificial intelligence. Clinical trial recruitment is one of the most time-consuming and costly aspects of drug development, and Deep 6 AI is one of several companies to take up the gauntlet of creating a more streamlined process. Wout Brusselaers is the founder and CEO.
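Deep 6 AI hasn't published how its matching works, but the general idea behind NLP-based trial matching can be sketched in a few lines: pull terms out of a patient record and rank trials by how well their eligibility criteria overlap. Everything below (the patient note, the trial IDs, the scoring) is a hypothetical toy, not the company's method:

```python
# Hypothetical toy sketch of keyword-based clinical trial matching.
# Real systems use far richer NLP; this only illustrates the shape of the problem.
import re

STOPWORDS = {"the", "a", "of", "with", "and", "for", "in", "to", "year", "old"}

def terms(text):
    """Lowercase word tokens, minus trivial stopwords."""
    return set(re.findall(r"[a-z0-9]+", text.lower())) - STOPWORDS

patient_note = "62-year-old male with type 2 diabetes and chronic kidney disease"

trials = {
    "NCT-0001": "adults with type 2 diabetes, no kidney disease",
    "NCT-0002": "type 2 diabetes with chronic kidney disease, ages 50 to 75",
    "NCT-0003": "pediatric asthma with seasonal allergies",
}

note_terms = terms(patient_note)

def score(trial_id):
    """Fraction of a trial's criteria terms that appear in the patient note."""
    criteria = terms(trials[trial_id])
    return len(note_terms & criteria) / len(criteria)

for trial_id in sorted(trials, key=score, reverse=True):
    overlap = sorted(note_terms & terms(trials[trial_id]))
    print(trial_id, f"score={score(trial_id):.2f}", overlap)
```

Tellingly, even this toy exposes why real systems need more than keyword overlap: the top-ranked trial here actually excludes the patient ("no kidney disease"), because naive matching can't handle negation.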

Security and Privacy

UnifyID uses data collected by sensors on an individual's mobile devices, such as GPS, accelerometer, gyroscope, magnetometer, barometer, ambient light, and WiFi and Bluetooth signal telemetry, to figure out what makes the owner unique, according to the San Francisco company's website. The data is kept on the local device and is encrypted and anonymized. UnifyID's approach can also be applied to desktop and laptop computers. Given healthcare's cybersecurity concerns over the theft of personal health data, UnifyID's approach could have useful applications in this sector.
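UnifyID hasn't disclosed its algorithms, but the general shape of behavioral biometrics is easy to sketch: condense sensor streams into a feature vector and compare new sessions against the enrolled owner's profile. The features, the threshold, and the simulated data below are all hypothetical:

```python
# Hypothetical toy sketch of behavioral biometrics: summarize sensor streams
# into a feature vector and compare against the enrolled owner's profile.
# Not UnifyID's method; the distributions and threshold are made up.
import numpy as np

rng = np.random.default_rng(42)

def features(accel, gyro):
    """Crude motion features: per-axis means and standard deviations."""
    return np.concatenate([accel.mean(0), accel.std(0), gyro.mean(0), gyro.std(0)])

def session(accel_mu, accel_sd, gyro_mu, gyro_sd, n=200):
    """Simulate one session of 3-axis accelerometer and gyroscope samples."""
    return features(rng.normal(accel_mu, accel_sd, (n, 3)),
                    rng.normal(gyro_mu, gyro_sd, (n, 3)))

# Enrollment: average features over ten of the owner's past sessions.
profile = np.mean([session(0.5, 0.2, 0.1, 0.05) for _ in range(10)], axis=0)

THRESHOLD = 0.5  # max distance to accept; a real system would tune this

for name, vec in [("owner", session(0.5, 0.2, 0.1, 0.05)),
                  ("stranger", session(1.5, 0.6, 0.4, 0.2))]:
    dist = float(np.linalg.norm(vec - profile))
    print(f"{name}: distance={dist:.3f} -> {'accept' if dist < THRESHOLD else 'reject'}")
```

The appeal of this kind of approach for healthcare is that the comparison can happen entirely on the device, consistent with UnifyID's claim that the data stays local, encrypted, and anonymized.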

Innovative World

Thimble.io, in Buffalo, New York, wants customers to discover their inner engineer, their inner maker. A subscription delivers an electronics kit each month that teaches users how to code, hack, and construct electronic devices. By playing the long game, stimulating young and older minds to use these kits as stepping stones toward realizing their creative interests, the company could help create a new generation of software developers and biomedical engineers wherever they might be.

Augmented and Virtual Reality

Lampix shuns the goggles and other headgear that tend to be associated with augmented and virtual reality. Instead, it takes a more subtle approach. The company's product lets users turn a flat surface such as a table into a projected computer screen and interact with the projection as if it were a touchscreen. As for healthcare applications, Lampix's platform could be used for everything from gaming technology for cognitive assessment to tools for expanding health literacy.