I saw an article on Slate about how Google wants to build the famous talking computer from Star Trek. Google doesn't
want to return links that might contain the answer to your question,
but rather to provide a direct answer. It's a romantic vision. I bet it motivates their engineering team. But it can never be.
There is a big, unbridgeable difference between Google and the Star Trek
computer. Google wants to sell you something. If Google gets to the point where it can
reasonably answer questions like, “What computer is best for me?”, or “Who
has good prices on HDTV sets?”, or "What restaurants are nearby?", I won't be able to trust the
answers, because they are shamelessly influenced by advertizer dollars.
What concerns me is whether I will have any choice in the matter.
The web was once touted as a powerful force for consumers,
disintermediating old industries like TV networks, record studios,
newspapers, and retail stores. But it is far more apt to think of the
web as merely a new channel of distribution, disrupting older channels
because of the internet's lower cost structure, and inserting new and
voracious intermediaries between producers and consumers. Rather than
share cost reduction with consumers, these new intermediaries want to
capture all the savings as profit for themselves.
But an intermediary can only capture these savings if it dominates
this new channel. Unfortunately, the internet makes that easy by
reducing a company's brand name to a few keystrokes. Get that brand
embedded in people's heads, and you own the internet as a channel. While Xerox fought for years to keep its name from becoming a verb, Google can laugh all the way to the bank. It may technically lose the ability to prevent a competitor from naming itself Google, but it owns the google.com domain name, so who cares?
If Google can successfully provide direct answers instead of links, they will become the
search engine. This gives them a huge advantage over other search
engines, and enormous power over vendors of any product you may want to
search for. Google will own the only path to find products. Amazon is set up as a marketplace, and is in one sense a competitor to Google, but people use Google first, before they even form the thought of buying a product.
Google is already dismantling the
fence between their ad-based search results and organic results. If Google can answer conversationally, they will no doubt completely eliminate any distinction.
If Google becomes "the" marketplace, then they will also wield enormous power
over vendors. They don't have to provide a
direct way to sort products by price. They can sell placement in their
search results, and extract a taste of every sale.
Thursday, May 28, 2020
Saturday, November 2, 2013
Why Androids are a Bad Thing
Building android robots must be absolutely the dumbest thing human
beings are trying to do. It is dumber than genetic tinkering, worse
than warming the climate, crazier even than atomic bombs.
The reason is obvious enough. The purpose of a man-sized, man-shaped, autonomous artificial intelligence, boiled down to its essentials, is to replace man. I have absolutely no problem with robots roving the surface of Mars, where I can't go, or cleaning up radioactive messes that would fry my bacon. But seriously, androids exist to put humans out of work, without replacing our need to eat.
Having replaced telephone operators, drafters, assembly-line workers, machinists, and managers (and with their sights set on teachers and university professors), why are engineers beavering away so earnestly to replace everybody else? It's not a very smart thing for smart people to do. Haven't they read Frankenstein? Or I, Robot?
I try to be philosophical. Hominids were evolutionarily successful because they could adapt knowledge and social structures faster than DNA could mutate. Androids can evolve their physical structures and processing horsepower faster than DNA too. Maybe the last Neanderthal admired that gracile, tall-walking Homo sapiens. Maybe I can manage to be proud of our robot descendants too.
I'm sincerely hoping to die peacefully in my sleep before this particular turd hits the turbine. Good luck to you new hands, though.
Tuesday, October 29, 2013
Creating Frankenstein's Monster
In the familiar story of Frankenstein, Dr. F creates a
monster, which later destroys him. Conventional criticism of
Frankenstein refers to Prometheus, punished for stealing the
gods' fire, or speaks of Dr. F's flawed relationships. But I draw a
lesson about the unintended consequences of technology.
See, everyone wants to create the monster. In your mind, you see how it will be: new, and big, and so very, very cool. And you will control it. It will do your bidding. So you build the thing, all in a rush of late nights and exciting revelation. It is only when the monster rises from its slab and starts crashing around that you realize your control may be imperfect. Then the monster does something scary and altogether unexpected, and you realize that control was always an illusion. From apps with security holes to drugs with side effects to disruptive technologies that unravel social structures, unintended consequences are the dark side of innovation. When you solve a problem in a new way, you must consider whether your solution enables unintended results that were impossible with previous solutions.
RFID tags are one of my favorite examples of Frankenstein technology. An RFID tag works like a paper label, only you can read it instantly, with a radio instead of your eyes. It doesn't matter if the tagged item is upside-down, on a pallet with 99 other items, or behind another object. At first, RFID looks like a very cool technology. It makes inventory or checkout a snap.
But then the monster starts to stir. RFID facilitates locating items in inventory. It also facilitates theft of valuable items without the need to hunt for them in every crate in the warehouse. RFID facilitates instant checkout, replacing human eyes, so it also enables theft by simply removing the tag. RFID provides remote reading. The walls that once kept your stuff away from temptation might as well be glass, except that a metal box or plate is opaque to radio, hiding tagged items you expected to be visible. RFID tags on credit cards, driver's licenses, and passports, and even the tags on ordinary items like subway cards and card-keys, identify individuals, evaporating the anonymity of the crowd. If RFID tags cannot be turned off, they are permanent beacons of identity. If they can be turned off, that function enables a potent denial-of-service attack against any user dependent on the technology.
These risks emerge directly from unintended uses of the technology as designed, in a world with multiple stakeholders. These risks are quite aside from risks arising from errors in realizing the technology. Any risk might have been mitigated in the original design. Some may still be, but only if all the stakeholders' voices are heeded. Dr. F may care less that the monster terrifies some peasants, and more when it kills his own wife. The central problem is that Dr. F created his monster without even considering the trouble it might get into.
The more rushed the development, the fewer use cases get considered. If you're building a video game, maybe the worst that happens is the game is unplayable due to griefers. If you're embedding software in a device with a long service lifetime, the chance is far greater that someone will exploit any lack of care in a way that you (or your company) will find painful.
Let the technologist beware, lest your name go down in history as the man who created the monster.