Tuesday 8 October 2013

Will These Guys Kill The Computer Interface As We Know It?

How two grade-school friends created Leap Motion, a company that wants to turn mouse-clicks into waves of the hand.

End Of The Interface 
David Holz [left] and Michael Buckwald have built a device called the Leap Motion controller that allows users to interact with computers with a wave of a hand.
Cody Pickens
David Holz took the main stage at this year's South by Southwest Interactive, the annual innovation conference in Austin, Texas, looking like a hobbit on casual Friday. He wore an oversize blue polo shirt and billowy khakis with a big wallet bulge in the front pocket and had a wild nest of curly hair that frizzed around a thinning patch in the back. Even at South by Southwest (SXSW), a gathering teeming with bright-eyed inventors with big ideas and little time for haircuts, the 24-year-old founder of the company Leap Motion, which makes a new motion-tracking controller for computers, stood out as a particularly glorious example of the species geek.
Tesla and SpaceX founder Elon Musk was scheduled to speak immediately after Holz, and Al Gore was up right after that, so eager fans were filing into the auditorium during the Leap Motion presentation, as if it were a kind of opening act—some background music as everyone picked their sight lines for the main event. Holz, who shared the stage with his co-founder and best friend, Michael Buckwald, didn't seem to notice. He spoke with a combination of barely contained enthusiasm and uncanny self-assuredness. The title of the presentation was "The Disappearing User Interface," and it called for a sweeping reinvention of how we interact with computers. "I should be able to log in to any computer and not have to know some language to use it," he said. "I should just do what makes sense to me intuitively. It's on the technology to understand me."
"It's becoming very clear that the thing holding back devices from doing more isn't their power or their cost or ubiquity or size," Buckwald explained. "It's that the way users interact with them is very simple. And that, unfortunately, leads to things like drop-down menus and keyboard shortcuts . . . elements that require people to learn and train rather than just do and create." The audience, many of whom were pecking away at laptops and tablets, perked up. And then Holz began his product demo.
The Leap Motion controller looked like a miniature iPhone and sat on a table in front of a computer onstage. Within an eight-cubic-foot cone of space above it, the controller can track motions as small as 0.01 millimeters, making it significantly more sensitive than Microsoft's Kinect. Holz started waving his hands above the Leap, and tracer lines danced across the computer screen. He wiggled his fingers, barely perceptibly, and zoomed in on the display until the tracers again filled it, only this time they were following movements within one centimeter of space. He panned around the display to show the tracers in three dimensions. A few people gasped. He stuck both hands out above the device, and a detailed 3-D picture of them appeared on the screen. He pulled up a block of virtual clay and, in a few seconds, sculpted a Bart Simpson–like character in thin air and spun it around for the audience to see from all angles. "I'm very proud that that is now possible," he said simply. The audience cheered.
In the days that followed, a stream of curious conference attendees flowed into the Leap Motion tent behind the Austin Convention Center. Most of them had never heard of the product before, but they understood its implications. Leap Motion is not about gesture control. As Holz explained in his demo, it's about ushering in a new era in which people interact with digital information as directly and naturally as if it were real. "Everywhere there's a computer can benefit from this type of interaction," he'd said. "That means things like tablets and phones but also things like robotic surgery."
One afternoon, hundreds of developers converged on the tent to try to get face time with Holz and Buckwald. A proud few showed off apps they'd already built—one, a security app that, instead of relying on passwords or retina scans, identifies people based on the unique biometric signature of their hands. Another developer set up behind a laptop in the corner and buzzed the crowd with a black quadcopter drone that he was controlling with his Leap by simply weaving his outstretched hand in the space in front of him, like a kid miming an airplane.

It all looked like magic, a roomful of people pawing at the air and grinning at the effects, as if the way they interact with computers would never be the same. To Holz, it was the beginning of a revolution he'd been planning for most of his young life.

Since before he could read or write, David Holz has been obsessed with technology. He grew up in Fort Lauderdale, Florida, in a coastal community of large homes, elderly people, and very few young families. Without friends nearby, Holz busied himself in the garage, taking apart any kind of electronic device he could get his hands on. "I accumulated this supply of electrical stuff from people in my town. Somebody would break their computer and give it to me," he remembers. He'd examine the parts of things he'd dismantled and try to figure out new uses for them.

Holz seems to have inherited the hacker mentality from his parents. When his mother was a girl, she tried to build a rocket; it left an eight-foot-wide crater in the ground. His father built a home chemistry lab as a kid, and after he left for college, his parents had to call the fire department to remove all the hazardous materials he'd been harboring. Shortly after marrying, the couple spent a few freewheeling years sailing around the Caribbean while David's father, a dentist, picked up odd jobs in his field.

Around the age of eight, Holz started channeling his curiosity into making things rather than taking them apart. "I was pretty good at building paper airplanes by then—I had already experimentally verified which ones were good in which ways," he says. But he needed to understand exactly how they worked, so he started fashioning wind tunnels in the garage, using Plexiglas, cardboard, big fans, and weighting and balance systems. His fascination with wind tunnels crescendoed in seventh grade, when he started building one that he hoped could break the sound barrier (it had compressed helium on one side and a vacuum chamber on the other). His parents stopped him before he finished, fearing for his safety. Holz simply switched projects. He read Stephen Hawking's A Brief History of Time and developed a simple way to test the theory of special relativity: by monitoring clocks he would send to places at various altitudes around the world.

In his experiments, Holz realized early on that computers were powerful tools. "I always felt like I was better with technology than without it," he says. But at a certain point, he started to notice the opposite effect. In middle school, he taught himself to use sophisticated design software and began building 3-D models of things he wanted to create. "I could mold a piece of clay in a few minutes, but it would take me, like, five hours to do so on a computer. And so I started saying, 'Well, what's the problem here? Why am I worse with this technology?' "


The Inventors As Kids 
Buckwald [left] and Holz, shown here at age 11, met in fifth grade in Florida and have remained friends since; they are now business partners. 
Courtesy Leap Motion
There had to be a better way to mold virtual clay. "It's like, the computer is powerful enough, and I know what I want, so it's not me but the input system that's the problem," he says. "If I were to design the best way to mold the piece of clay, it wouldn't be to push a bunch of buttons. It would be to use my hands." Like that, the seed for Leap Motion was planted.

Meanwhile, at school, he had befriended a small group of other smart kids who had no interest in sports—among them, a young debate junkie named Michael Buckwald. The group started holding round-table sessions where they'd try to reimagine big ideas, such as the education system and presidential politics.

School itself was a challenge, though, because Holz couldn't get his teachers to answer his incessant questions, especially in math and science. One of them would explain, for instance, that the square root of a negative is an imaginary number, and Holz's hand would shoot up. "I'd be like, 'Okay, I understand that that works, but why do we live in a universe that has that sort of mathematical construct?' Which is actually a very deep mathematical question, and there's a totally reasonable answer, but the teacher would say, 'I'm not going to answer that.' "

College, at Florida Atlantic University, was a little better. Then he headed to the University of North Carolina–Chapel Hill to pursue a Ph.D. in applied math. In some ways, Chapel Hill was a dreamland for Holz. There were mathematicians everywhere, and he was drawn to them because he felt they understood problems "all the way down." Even better, "UNC is the only place in the world where mathematicians have access to as much stuff as most physicists do," he says. "They had giant wind tunnels. They had a huge wave pool so people could understand the math behind waves."

But it wasn't enough. Holz started applying to join different research teams, taking on as many as a dozen projects outside of his studies. There were projects with NASA's Langley Research Center studying laser radars and methane on Mars, a neuroscience project with the Max Planck Florida Institute, a fluid-dynamics project at UNC.

And yet, he kept coming back to his favorite idea: building a new gesture-based way to interact with computers. He'd returned to it periodically since middle school, and by grad school he'd built a prototype. Between that, his other projects, and his graduate work, Holz was spread thin and had to make some decisions. "I sort of felt like, 'These aren't the problems I want to be working on, and maybe I have the skills and everything I need at this point. Do I finish my Ph.D., go work at NASA, and use that position to eventually start a company? Or can I skip all that and just go straight to the company?'" He chose the latter and left UNC without a degree after only about a year.

A month after SXSW, Holz sits cross-legged in a black swivel chair in a conference room at Leap Motion's San Francisco headquarters, a bunker-like underground space across the street from the Bay Bridge on-ramp and less than a block from where he shares an apartment with Buckwald. Not that Holz really lives in the apartment—he eats catered meals here in the bunker and often sleeps on his beanbag chair. Some co-workers have taken to calling his hair "the nest."

Like any good digital start-up, Leap Motion has clever names for its conference rooms—in this case, various sci-fi spaceships. There's Galactica, Death Star, and the one we're in now, Enterprise. The name is apt. One of the longest-running plot devices on Star Trek was called the Holodeck, in which characters could interact with holograms—say, a scale model of a vintage New Orleans jazz club or a combat simulation—for R&R or training.

How Leap Motion Works 
Sensors on the Leap Motion controller capture movement within an eight-cubic-foot cone. Holz's algorithms translate hand motions into 3-D data with 0.01-millimeter accuracy. 
Cody Pickens

When Holz and Buckwald set out to create their company, they intended to build something akin to a Holodeck. The prototype wasn't pretty—about two backpacks full of electronics that took 30 minutes to set up—but the system's eight networked boxes were sensitive within an area large enough to create what Holz called a "holodesk." Holz had made some breakthroughs in the math behind the machine in college, and the core principles he developed then continue to drive the product today.

Buckwald, who is almost painfully shy, remembers discussing a potential company with Holz back in 2010 and realizing that as crude as the first device was, it represented a big opportunity. Buckwald was only 21 at the time, but since graduating early from George Washington University (with a double major), he had already started and sold an online listings company called Zazuba and spent a year in Madagascar, setting up operations for One Laptop per Child. The weekend of Jon Stewart and Stephen Colbert's Rally to Restore Sanity and/or Fear, Holz came to visit Buckwald in Washington, D.C. The two spent long hours talking about the technology, much as they had discussed so many other transformative ideas in middle school. By the time Colbert packed off, they had decided to form a company. Holz would focus on the math, while Buckwald would help turn his friend's ideas into a business.

The dream of gesture control is not a new one, but it became reality only in the past few years. Nintendo's Wii controller, which came out in 2006, was the breakout device in some ways. And although it was a lot of fun, it was of limited use beyond gaming because users had to hold a special wand. There have been other attempts at gestural interface—other wands, wired gloves, and, more recently, an armband that reads electrical activity in muscles, developed by a company called Thalmic Labs. But until now, the state-of-the-art approach has been that of Microsoft's Kinect, which was released as a game controller for Xbox just days after Holz and Buckwald decided to start their company. It required nothing of the user other than moving around in the space in front of the device.
It's a new era in which people interact with digital information as naturally as if it were real.
At first the Kinect used a technology known as "structured light," in which it projected many points of light across a room and tracked how they were interrupted by a moving object. This works well when detecting relatively large movements, like a golf swing or a punch. But to track small individual finger movements, it would have to measure so many points of light that it would require prohibitively large amounts of processing power. This spring, Microsoft replaced structured light with "time of flight," which works more like radar. By projecting infrared light and measuring the time it takes to reflect off objects, the machine achieves a sense of depth perception and is able to build a 3-D image of what it sees. The new approach is more accurate than structured light, but it's not nearly as precise as Leap Motion's technology. The Kinect works best from a few feet away. Get up close to do fine work, and the accuracy degrades.
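The core relationship behind time of flight is simple: light travels to the object and back, so depth is half the round-trip time multiplied by the speed of light. A minimal sketch of that arithmetic (illustrative only, not Microsoft's implementation):

```python
# Illustrative time-of-flight depth calculation. A sensor emits an
# infrared pulse and measures how long the reflection takes to return;
# the distance is half the round trip times the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Return the distance to an object given the round-trip time
    of a reflected light pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~13.3 nanoseconds means the object is
# roughly two meters away.
print(depth_from_round_trip(13.3e-9))
```

The tiny time scales involved are why fine-grained depth at close range is hard: resolving millimeters requires timing reflections to within a few picoseconds.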

Leap Motion works completely differently. Holz compares the information a Leap gathers to that of an analog camera in soft light, which means it can detect subtle shadings that describe the curves and tiny nuances of an object. It then tracks how those shadings change as an object moves. The company has been silent about how exactly the device turns its image files into real-time 3-D motion, but the secret is in Holz's proprietary math. What's perhaps most impressive is that all the processing happens with virtually zero delay (whereas Kinect has long been dogged by complaints about lag time). "We're using only a small percentage of a single core of the CPU [central processing unit]," Buckwald says. "There's no special silicon in the device, and we're using off-the-shelf sensors, off-the-shelf cameras. Everything we do today could have been done 10 years ago"—if only someone had had Holz's math.

Bill Warner, the founder of Avid Technology, which makes multimedia editing products, learned the secret behind Leap Motion shortly after Holz and Buckwald's weekend in D.C. (he agreed to become the company's first investor on the spot). He describes the approach as head-slappingly straightforward. "As with any great invention, those insights are really hard to come up with, but once you hear them you go, 'Of course!' You didn't think of it because you weren't looking at it that way." Holz managed to understand the problem of gestural control all the way down, which allowed him to see things everyone else has missed. "A lot of times with people as smart as David, it's hard to follow them and see what they are seeing and what they understand," Warner says. "That's not the case with David. Part of his brilliance is that he makes things really simple, even for himself."

With the math in place, the more immediate challenge for Holz was accessibility—turning his eight networked boxes into a viable product, either for regular consumers to buy or for other companies to embed in their products. Andy Miller, a former Apple executive, was working as a venture capitalist when he met Holz and Buckwald in 2012. He'd heard stories of these two genius founders, one with crazy Young Einstein hair, plunking backpacks full of hacky-looking but amazing electronics on investors' conference tables. He asked to see a demo. "David was looking extremely wacky that day," he recalls, "and Michael was sort of talking to me with his head down. I was expecting to see what I'd heard about, which was this big system, but they were like, 'We're all set up, this is it.' It was just one little box, and it was pretty beautiful."

Miller invested a significant amount in the company and came on board as president a few months later. "The more time I spent with David, you know, you're just blown away," he says. "I've been fortunate enough to work with Steve Jobs, and David is one of these guys kind of like Steve, where he's a mile wide and deep."
After Miller joined Leap Motion, the company refined the design further, planned an Apple-like app store called Airspace, and created a demo video that went viral; 15,000 developers applied to build software for the device in the first week. "I spent an entire week going through every e-mail that came in asking about partnership opportunities," Miller remembers. "There were thousands. You know: 'We think this can be a big help in automotive.' 'We think this can help people with disabilities.' 'Can you help us with our workflow at Jack in the Box?' "


In late March, a few weeks after SXSW, Victor Luo, a human-interface engineer from NASA's Jet Propulsion Lab in California, stood in front of a Leap Motion controller in San Francisco and operated a one-ton space robot in a lab 350 miles away. The rover, called Athlete (shorthand for "all-terrain hex-limbed extraterrestrial explorer"), has six arms and can fly. NASA built an application that mapped the rover's limbs to a human hand, and Luo was able to move its arms by wiggling his fingers. Luo was performing this feat onstage at the annual Game Developers Conference. As he raised his hand, the audience watched the rover's jets fire on a big simulcast screen. The enormous machine lifted off the ground. Luo's colleague, NASA supervisor Jeff Norris, addressed the crowd: "I want us to build a future of shared immersive tele-exploration—everyone exploring the universe through robotic avatars, not just peering at a picture on a screen but stepping inside a Holodeck and standing on those distant worlds."

The NASA demo is one of the strongest votes of confidence for Leap Motion, and it's far from the only one. In the months since the company started sending out developer's kits and test units, there's been an influx of demo videos of early apps. Google Earth added Leap Motion support, and a corresponding video showed a person's hand zooming Superman-style across the San Francisco Bay, through the courtyard of the Louvre, and out into space. An electronic musician named Adam Somers released a demo of something beautiful he called an AirHarp.

This spring, HP announced that it will start bundling Leap Motion controllers with some PCs and that it plans to one day embed the technology in devices. In the meantime, anyone will be able to buy a controller off the shelf and plug it in as a peripheral. Out of the box, the device will allow users to control some basic computer functions, like cursor movement, but improving existing systems is not really the point. "If we're successful and build something that is a fundamentally better way to interact with a computer, there are essentially an unlimited number of use cases," Buckwald says. "Eventually, anything that has a computer could be controlled with it—every laptop, every desktop, every smartphone, every tablet, every TV, every surgical station, every robot, potentially even a Leap in every car."
It's hard to say what kinds of applications gestural interface will enable.
In the history of computer user interfaces, there have been only two major sea changes: in the mid-1980s, when Apple replaced the old command line interface with the mouse-based graphical user interface, and, more recently, when Apple introduced the world to multitouch mobile devices. In both cases, the intent was to make human-computer interaction more intuitive, to minimize the barriers between man and machine. "If you think about the mouse, it extends your reach to the screen. And the touchscreen extends it further, so you're actually touching the screen," says Warner. "Leap Motion is extending your reach inside the screen."

It's hard to say what kinds of applications gestural interface will enable. Few could have predicted that multitouch would bring, say, Angry Birds. Gestural interface probably won't act as a wholesale replacement for existing interfaces, though. Just as multitouch improved certain functions (flipping through a digital magazine, for example) but not others (creating a digital magazine), Leap Motion controllers and devices like them will excel at some uses and not others. Manipulating a spreadsheet, for one thing, probably wouldn't be any easier with natural interface; the desktop experience is already pretty highly evolved.

And even the most naturally 3-D applications have their limits. One of the first things you notice when you start using a Leap Motion controller is the lack of anything tactile; there's no haptic feedback to help calibrate touch, as there would be in the real world. When I ask Holz about this, he shrugs it off. "Because it's digital, we can put more information in there than you might get in the real world," he says—lighting changes, for instance, can cue when your finger is touching something. And in time, Holz says, virtual haptic feedback is entirely possible, probably by means of focused ultrasound, a process developed by researchers at the University of Tokyo. "I think you may see a lot of that in the near future."

Another limitation: As a user moves his hands in three dimensions, the results appear on a two-dimensional screen. This can be disorienting. Short of building a real Holodeck, of course, it's unavoidable, and getting over that hump will require the development of better 3-D–display technology. Holz imagines Leap Motion integration with head-mounted displays such as Google Glass as maybe the best solution. "It's like I'm in a Holodeck without needing to have a Holodeck. You turn the space around you into a Holodeck." I ask him if the company is in talks with Google to create just that. "I don't think I can say details, but, uh . . . it would make sense," he says.

It's a heady vision, the kind of thing that gets Holz excited, and he spins off into talk of giving people superpowers—for example, the ability to "undo" virtual actions in this fused digital-physical world, the same way you can undo actions in, say, a Photoshop file. Or the ability to sculpt something in midair, then quickly replicate it with a 3-D printer, turning a thought into an object in a matter of moments.

"The idea is that we should be able to have the same sort of fine degrees of interaction with the virtual world as we do with the real world," Holz says. "And that gives us a lot more power. We can define the rules in a digital world however we want, so we can do a lot of things that we just couldn't before. It's one of those situations where, through technology, we can actually be better.

MIT Computer Software Makes The Internet 3 Times Faster

By generating algorithms that prioritize where to send data, the computer outwits human solutions to network congestion.

If you're reading this, you're probably using a version of the Transmission Control Protocol, or TCP, a system that regulates internet traffic to prevent congestion. It works, and it's getting better all the time. But it was a system made by puny humans; surely our machine overlords can do better.
Yes, and possibly as much as two or three times better, say the MIT researchers behind Remy, a system that spits out congestion-stopping algorithms.
To use Remy, an Internet-goer plugs in answers to a few variables (How many people will use this connection? How much bandwidth will they need?) and what metric they want to use for measuring performance (Is throughput, the measure of how much data is going through, the most important? Or is it the delay, the measure of how long it takes that information to travel?).
The system then starts testing algorithms to determine which works best for your situation. Testing every possible algorithm would be impractical, so Remy prioritizes, searching for the smaller tweaks that will result in the largest jump in speed. (Even this "quicker" process takes four to 12 hours.)
The resulting rules that the system spits out are more complicated than those in most TCP implementations, according to Remy's inventors: while a TCP program might operate based on a few rules, Remy works out algorithms with more than 150 if-x-then-y rules for operating. The simulations sound impressive: doubled throughput and two-thirds less delay on a computer connection, and a 20 to 30 percent increase in throughput for a cell network, with 25 to 40 percent less delay.
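The search the article describes can be sketched as an optimization loop: generate candidate congestion-control settings, score each one in a simulation against the user's chosen metric, and keep the best. This toy sketch is not Remy's actual algorithm; the one-parameter "rule" (a congestion-window size) and the simulator are invented stand-ins for illustration.

```python
# A toy illustration of Remy's approach: score candidate congestion
# rules in a simulated network and keep the best performer.
# The simulate() model below is a hypothetical stand-in, not a real
# network simulator.

def simulate(window_size: int) -> tuple[float, float]:
    """Return (throughput, delay) for a given congestion-window size.
    Bigger windows raise throughput up to link capacity, but pushing
    past capacity only adds queueing delay."""
    capacity = 100.0
    throughput = min(window_size * 10.0, capacity)
    delay = 10.0 + max(0.0, window_size * 10.0 - capacity) / 5.0
    return throughput, delay

def score(window_size: int) -> float:
    """Objective combining throughput (good) and delay (bad);
    the weighting reflects the user's stated priorities."""
    throughput, delay = simulate(window_size)
    return throughput - 2.0 * delay

# Exhaustively score a small candidate space and keep the best rule.
best = max(range(1, 51), key=score)
print(best, simulate(best))
```

Real congestion control involves hundreds of interacting rules rather than one knob, which is why Remy's search takes hours even after pruning.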
But that really only makes Remy impressive on paper. The researchers haven't yet tested Remy on the wide-open Internet, which presents a whole new set of variables the researchers have to account for. It might very well turn out, as the researchers told PC World, that Remy just provides people with a new way to look at the problem, instead of a solution in itself.

Samsung phones and tablets face US import ban


Some older Samsung gadgets look set to be banned in the US following a patent row with Apple.
In early August, Apple won a case at the US International Trade Commission (ITC) that found Samsung had infringed two patents covering mobile technology.
That victory called for an import ban on some Samsung products but this was postponed pending an appeal.
A US official overseeing the patent row has now rejected Samsung's appeal, meaning the ban will come into force.
"After carefully weighing policy considerations, including the impact on consumers and competition, advice from agencies and information from interested parties, I have decided to allow the commission's determination... to become final," said US trade representative Michael Froman.
The patents in dispute cover detecting fingers on a touchscreen and the workings of the audio jack on smartphones and tablets. In August, Samsung was cleared of violating four other patented technologies.
So far, it is not clear which products will be banned from sale. In its appeal, Samsung said it had, for newer products, developed its own technologies that did not draw on Apple's patented ideas. The ITC has already approved the workarounds for the disputed technologies.
In August, US President Barack Obama overturned another ITC ruling that called for a ban on Apple products. He issued the first presidential veto for 26 years on an ITC matter when he decided to stop the ban on older iPhones and iPads.
Apple and Samsung have regularly clashed in court over the past few years and have fought patent battles across 10 countries.

How 3-D TV Technology Works

We all know that a television, any television, displays two-dimensional images, so how does a 3-D TV create the illusion of a third dimension?
Creating the illusion of 3-D relies entirely on the fact that we have two eyes separated by a particular distance. If each eye is shown the same scene shot from a slightly different angle, the brain combines the two images into one that appears three-dimensional. This is the principle behind every 3-D effect, from your old red Viewmaster to Avatar shown in IMAX. The Viewmaster showed a completely separate image to each eye, though; 3-D movies and televisions rely on different methods.
In the first method, the two images needed to create the effect are combined into one. Each image is altered by either a color filter or a polarizing filter. With color filtering, the viewer wears 3-D glasses with two different-colored lenses; each lens blocks one of the two combined images, so each eye sees a different angle of the same shot, producing the 3-D effect. Originally this method, called anaglyph, could not reproduce a color picture, but modern advances have made color possible, although color quality still suffers. Polarization uses the same principle, but rather than altering the color of an image, it alters the polarization of the light waves the viewer sees. The glasses have differently polarized lenses, each of which passes only one of the two images to its eye. Picture quality is better with this method, and it is the one used in most 3-D movie theaters.
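The color-filter combination step is easy to show concretely: a red-cyan anaglyph frame takes its red channel from the left-eye image and its green and blue channels from the right-eye image, so a red lens passes only the left view and a cyan lens only the right. A minimal sketch, representing each image as a flat list of (r, g, b) pixel tuples:

```python
# Minimal red-cyan anaglyph: red channel from the left-eye image,
# green and blue channels from the right-eye image. Red-tinted
# lenses then pass only the left view; cyan lenses only the right.

def make_anaglyph(left, right):
    """Combine a left/right stereo pair (lists of (r, g, b) tuples)
    into a single red-cyan anaglyph frame."""
    return [
        (l_px[0], r_px[1], r_px[2])  # red from left, green/blue from right
        for l_px, r_px in zip(left, right)
    ]

left_view = [(200, 50, 50), (10, 20, 30)]
right_view = [(180, 60, 70), (15, 25, 35)]
print(make_anaglyph(left_view, right_view))
# → [(200, 60, 70), (10, 25, 35)]
```

This channel-splitting is also why anaglyph color quality suffers: each eye receives only part of the spectrum, and the brain has to reconcile the two.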
The second method involves powered 3-D glasses with LCD screens for lenses. The glasses are synced to the display via infrared or another protocol, and the two angles of each frame are shown to the viewer sequentially. The lenses alternately open and shut so that each eye sees a complete version of each angle rather than part of a combined image. This works much like the old Viewmaster mentioned above, except that rather than showing each eye a different image at the same time, the images appear in rapid sequence. It is a very effective way to create the 3-D effect, but it halves the frame rate of the content. Video normally runs at 30 frames per second (29.97, to be exact), so with this method each eye sees only 15 frames per second, which lessens the apparent smoothness of the content.
Another method, which requires no glasses, has been around for a few years but is only now coming to market. It uses filters or lenses in front of the screen to direct the separate images to each eye. Early versions of this technology required the viewer to maintain a very specific distance and position in relation to the screen; even relatively minor deviations would break the 3-D effect. Today, combining the filters and/or lenses with a camera and face-recognition software lets the screen adjust in real time and project the split images to the current location of the viewer's eyes. Nintendo will use this technology in its upcoming 3DS handheld, and Microsoft has even created a screen that can project 3-D to four people at once using the same principle.
Even though 3-D viewing has been around in one form or another for more than a century, it really is still in its infancy. Expect more 3-D breakthroughs in the years to come as its popularity rises again.
This article originally appeared at 3DTVBuyingGuide.com.