While at a Quantified Self meeting earlier this week I got into a conversation with a guy who runs a health improvement project for long-haul truck drivers. They’re in terrible shape: obesity, heart disease, diabetes and other problems. This comes from spending 12 or more hours per day sitting behind the wheel of an 18-wheeler year after year. The hours spent are largely sedentary. It’s compounded by large helpings of truck stop food (“I’ll take the chicken-fried steak with extra gravy.”)
I asked the guy if the truckers were aware of the development of self-driving vehicle technology. He said the truckers he knew were about the last people who would be reading about artificial intelligence and how it’s coming into driving. In other words, if the technology starts to replace driving as a job, they’re not going to see it coming.
And it is coming. In an article today titled “Futuristic cars are coming faster than you think,” there’s a rundown of companies actively developing self-driving cars and equipment. Google, of course, Mercedes, Cadillac, Audi, BMW and others are in the game, licking their lips over the profits. Nevada was lobbied to pass laws permitting the testing of autonomous vehicles in the state. The National Highway Traffic Safety Administration is testing networking equipment for cars so they can communicate with each other.
The first and most compelling force behind this is to get rid of paid drivers in business. Truck drivers, cab drivers, delivery van drivers are an expensive pain in the neck, like all employees, and companies want to get rid of them. Ravi Pandit, CEO of KPIT Cummins, puts it pretty directly: “There is a strong business case for an autonomous car that can drop you off or a cab without the expense of a driver.”
And the jobs lost? Tish-tosh, the guys making the money don’t spend a minute worrying about that. They’re not the ones who are going to lose their jobs. They wave a hand and say, “Technology has been displacing jobs for 200 years and people always find something else to do.”
So the truck drivers I mentioned at the start won’t have to worry about getting fat and sick sitting behind the wheel for much longer. GM estimates it will start selling semi-autonomous vehicles in 2015 — three years from now — and fully autonomous cars by 2020.
New York City put out a YouTube video recently of its “Taxi of Tomorrow.” It’s got some convenient features…including anti-bacterial upholstery so the back seats won’t smell so funky. (Hmm, I wonder why?)
Meanwhile, Google’s self-driving car program took a blind man, Steve Mahan, for a drive the other day. Self-driving cars would be good for blind people who have to be driven by a friend or take a cab everywhere. But with Google’s car you just get in and tell it where to go. No driver needed.
So if you’re in taxi-driving school it looks like there’s going to be a short future for New York City’s new cab because the one coming just after that is likely to be a cab with no driver at all. Folks will just call the cab up, get in, and tell it the destination. No more surly drivers who barely speak English. And no more funky smell in the back!
One of the few pleasures of growing older is watching ideas come ‘round again and again and enjoying a quiet chuckle while thinking “here we go again.” A NY Times article about a company called Factual, shared by Kevin Kelly on Google+, tells me the wheel just keeps on turning. Factual aspires to gather and bank all the “facts” in the universe.
Back in the ‘70s AI programmers built “expert systems” that tried to compile complete databases of knowledge by interviewing “experts” such as doctors. They ran into problems because so much expertise turned out to be “tacit” knowledge gained by docs from years of experience. It wasn’t quantifiable or amenable to database capture. The latest incarnation seems to be IBM’s Watson system that will be “trained” at the Sloan-Kettering cancer center. Here’s hoping they’ve learned a lot from earlier failures to meet expectations.
In the ‘90s came Knowledge Management (KM); the idea was that companies could capture all the “knowledge” their employees had, put it in a database, and then share it over the new corporate networks being bought. Initially KM saw knowledge acquisition as a four-tier idea: 1) “raw data”; 2) “information” (aggregated data that meant something); 3) “knowledge” (information adequate for good decision making and actionable); and, finally, “wisdom” (making the “right” decision). KM is still around but hasn’t set the world on fire as promised. They’ve found, evidently, that the first two levels are pretty useless unless real people can generate the second two things: knowledge and wisdom. The Factual company seems to be starting—again—with data. Well, clean data might help avoid the old saying: “garbage in, garbage out.”
I’m not at all opposed to building on what went before to make something improved. But pardon me if I don’t get too excited (or become an investor, haha) until I see something that hasn’t been done before.
Here are quotes from a Physorg.org article about a monitoring device being developed in New Zealand that measures a bunch of physiological factors and uses your smartphone to communicate the data back and forth.
Researchers in New Zealand have developed a prototype Bluetooth-enabled medical monitoring device that can be connected wirelessly to your smart phone and keep track of various physiological parameters, such as body temperature, heart rate, blood pressure and movements. The prototype could be extended to include sensors for other factors such as blood glucose as well as markers for specific diseases. The connectivity would allow patients to send data directly to their healthcare provider and receive timely advice and medication suggestions. […]
The team adds that the mobile phone can be used as a gateway to further relay patient health data to a remote database via the mobile network for remote diagnoses. “Any medical instructions can be sent back instantly to the mobile users,” the team says. “The use of standard development tools makes it possible for patients to easily use everyday mobile devices for their personal health monitoring and assessment anytime anywhere.” They add that, “Bluetooth and mobile networks enable wireless communications among mobile users, medical professionals and other healthcare givers in an easy, secure and efficient manner.”
Sounds pretty impressive. The sensor technology is galloping ahead, but what I wonder is how all the protocols and middleware parts of the system are to be worked out. Usually the device marketers talk vaguely about sending the data to your doctor or “provider” for them to use in diagnosing or monitoring your condition. I can’t imagine, however, that current physicians or existing ancillary staff are going to be sitting around to review data coming in from patients. Doctors sure don’t have the time to look at anything like near-real-time information. It would be prohibitively expensive if doctors even tried. So who would review the data? Physician assistants? Some new breed of technicians? Would there be direct feedback to patients?
It seems to me that a really solid, large scale computer system as an intermediary is going to be essential in all this health data schlepping. The user (patient) is going to need a profile that includes full health history, family history, and evolving baseline data against which new information can be compared. What’s normal? How does the incoming stream get interpreted for anomalies, short-term fluctuations, long-term trends and other things that would indicate somebody ought to take a look? What are the red flags for the individual? What are the action triggers in the data, and who does something when an important event is detected? If it’s a DIY proposition for users, where are the quality, easily-accessible medical information sources they can call upon to support them?
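To make the “what’s normal?” question concrete, here is a minimal sketch of the kind of per-user baseline check such an intermediary system would have to run on every incoming reading. All names and the threshold are illustrative assumptions, not taken from any real monitoring product:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, readings, z_threshold=3.0):
    """Flag readings that deviate from a user's personal baseline.

    `baseline` is a list of historical values for one metric (say,
    resting heart rate); `readings` is the incoming stream from the
    device. A reading is flagged when it lies more than `z_threshold`
    standard deviations from the baseline mean. (Function name and
    threshold are hypothetical, for illustration only.)
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [r for r in readings if abs(r - mu) > z_threshold * sigma]

# A week of resting heart rates establishes the personal baseline;
# then new readings are screened against it.
history = [60, 62, 61, 63, 59, 60, 62, 61]
alerts = flag_anomalies(history, [62, 90, 58])  # only 90 stands out
```

Even this toy version shows why the baseline matters: an athlete’s “normal” and a heart patient’s “normal” differ, so the red flags must be computed per individual, not from population averages.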
So it seems to me that very insightful and reliable information systems as well as all kinds of protocols between users and their health care agents is just as crucial as all the little gizmos in this field of mobile health monitoring. But I don’t see anywhere as much buzz about those things as I do about the easy part, the monitoring technology.
Yikes! World urbanization growth graphic 1950-2050
It’s a whole new world, that’s for sure!
David Autor, an economics professor at MIT, is co-author of a report that confirms the suspicions of many that the low-cost labor in Asia is costing jobs in the US and damaging whole communities affected by lost manufacturing.
“People drop out of the labor force, and the data strongly suggest that it takes some people a long time to get back on their feet, if they do at all.” Moreover, Autor notes, when a large manufacturer closes its doors, “it does not simply affect an industry, but affects a whole locality.”
The study looked at offshoring impact between 1990 and 2007. The finding:
At the start of that period, low-income countries accounted for only about 3 percent of U.S. manufacturing imports; by 2007, that figure had increased to about 12 percent, with China representing 91 percent of the increase. […] All told, as American imports from China grew more than tenfold between 1991 and 2007, roughly a million U.S. workers lost jobs due to increased low-wage competition from China — about a quarter of all U.S. job losses in manufacturing during the time period.
“People like to think that workers flow freely across sectors, but in reality, they don’t,” Autor says.
Laissez-faire capitalists, supercilious CEOs, and technology engineers cavalierly wave off employment problems caused by offshoring and automation. They just say people have been finding new things to do since the beginning of the industrial revolution. So it’s great to see a top-ranked school like MIT provide hard data to dispel superficial myths.
The AAAS (American Association for the Advancement of Science) has wrapped up its annual meeting sounding the alarm that scientists have to fight back against anti-science in America. “It’s about persuading people to believe in science, at a time when disturbing numbers don’t,” said meeting co-chair Andrew Petter, president of Simon Fraser University in this western Canadian city.
Well, duh! Twelve years ago when George Bush was elected and the anti-stem cell campaign by the religious right was in full swing, I contacted the AAAS to suggest to them they needed to encourage scientists to get out and explain science to the public. I don’t have any connections inside the AAAS so I just sent an email to their PR department saying I thought scientists needed to get out of the labs more and do public communication as part of their role. I said the AAAS should be preparing scientists for public speaking and dialogue. I got back a reply that said if anybody wanted to know more about science they could subscribe to Science, the expensive, highly technical journal of the AAAS.
It pissed me off. There was a kind of arrogance in it. They seemed to be saying scientists are above the fray, and their job is just doing science, not engaging the illiterate public and dabbling in politics. Well, finally, they may be getting the message.
I don’t know whether to think: “Too little too late,” or: “Better late than never.” It’s both. I count myself lucky to have been in school in California when the Soviet Sputnik scared the crap out of Americans and thrust as many boys (yes, boys) as possible into science in school. I rode the fear-driven wave of generous tax money to a strong, relatively inexpensive education.
Coupling military science research with the riches of intellectual talent that the US got from Europe during the Nazi terror and then oppression in many places, America became for a time the undisputed leader in science. (I was reminded by a Nova program last week that the American space program including the Saturn rocket that took US astronauts to the moon—largely a military adventure—was headed by Wernher von Braun, the guy who developed the Nazi V-2 rocket program that damn near tipped the war back toward Nazi victory.) The era of greatest American science was fueled, in ample part, by the notorious “brain-drain” of the best and brightest from other parts of the world.
I mark the point at which the US jumped the shark in science at the refusal of Congress to fund the Superconducting Super Collider in Texas because it was too expensive for the budget. Sound familiar? The US cheaped out on science leadership and now we’re steadily dropping in indicators of science achievement compared to strongly committed countries like China.
Science may indeed be in trouble in the US because this is not a country that has ever really valued intellectual achievement and the remarkable knowledge that science brings. We like the comforts and conveniences it brings, but we heap riches on jocks and celebrities, not “egg-heads.” Nevertheless, science is roaring ahead. It is fully international and participated in by smart people all over the world. The Super Collider ended up as the CERN facility in Switzerland/France looking for the Higgs boson. China is making plans to go to Mars. Science will surge ahead along with the socioeconomic benefits of it in countries like China that have large populations of smart, hard-working young people who live in a culture that values education and the achievements that come from it.
It’ll be interesting to see if the calls for action at the AAAS meeting result in anything in this country. I rather doubt it. But things have a way of working out. If you value science, don’t worry. Science is happening in a big, big way, but perhaps not so much by your fellow Americans. While we’ve been thumping our chests about “American exceptionalism” others are really doing exceptional things at an ever increasing rate. Those who do the hard work and those who support them will reap the benefits, and that’s as it should be, right?
Science under attack by big business