Future of the Infrastructure, featuring: The Internet

From Science News:

Lee Rainie, director of the Pew Internet & American Life Project in Washington, D.C., said: “There’s a sense that people are marching, not necessarily blindly, but certainly without full knowledge, into a future that they don’t fully know.”

How the Internet will change the world — even more

Fighting the (cyber)bad guys

An article today discusses “cyberattacks” on the government and banks leading to the loss of intellectual property and capital. Now there’s discussion of creating a “cybersecurity ambassador.” I like that “cyber” is likely to become part of Homeland Security. Maybe I really will be able to get a job after I graduate, if I graduate….

Skinput: Appropriating the Body as an Input Surface

This came today as part of the ACM news. I wish I had the skills to do something like this. Sadly, the logic of programming and time are against me.

A combination of simple bio-acoustic sensors and some sophisticated machine learning makes it possible for people to use their fingers or forearms — and potentially, any part of their bodies — as touchpads to control smart phones or other mobile devices.

The technology, called Skinput, was developed by Chris Harrison, a third-year Ph.D. student in Carnegie Mellon University’s Human-Computer Interaction Institute (HCII), along with Desney Tan and Dan Morris of Microsoft Research. Harrison will describe the technology in a paper to be presented on Monday, April 12, at CHI 2010, the Association for Computing Machinery’s annual Conference on Human Factors in Computing Systems in Atlanta, Ga.

The full article is available at CMU.edu.
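
To get a feel for the idea, here is a minimal sketch of the general approach the article describes: take a short window of the vibration/acoustic signal produced by a tap, reduce it to a few features, and let an off-the-shelf classifier guess which body location was tapped. The feature set, classifier choice, and the fake data below are my own assumptions for illustration, not the Skinput authors’ actual pipeline.

```python
# Hypothetical sketch: classify which skin location was tapped from a short
# sensor window, using simple spectral features and an SVM.
# This is NOT the Skinput authors' pipeline; all choices here are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a 1-D sensor window to a small feature vector:
    coarse frequency-band energies plus amplitude statistics."""
    spectrum = np.abs(np.fft.rfft(window))
    bands = np.array_split(spectrum, 8)            # 8 coarse frequency bands
    band_energy = np.array([b.mean() for b in bands])
    stats = np.array([window.std(), np.abs(window).max()])
    return np.concatenate([band_energy, stats])

# Fake training data: pretend we recorded 200 taps at 5 forearm locations.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 256))              # 256-sample sensor windows
labels = rng.integers(0, 5, size=200)              # which location was tapped

X = np.array([extract_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With real tap recordings instead of random noise, the classifier would learn the distinct resonance each tap location produces, which is the “sophisticated machine learning” part the article alludes to.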

Ph.D. student Chris Harrison has a website about the project. He says:

Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception (our sense of how our body is configured in three-dimensional space) allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area.

I look forward to reading the full text.