
There have been two interesting keynote speeches in recent memory: Bill Gates at TechEd, and Steve Jobs at WWDC.  They both reflected on the current state of technology, made predictions, and made announcements.  So what of lasting value came out of what they had to say, and how might this impact choices being made for the future?  I’ll give you my $.02 worth on both of them, and see what you think as well (and I’ll try to fix the Blog comments to get some feedback too).

Let’s start with Bill Gates (transcript).  Perhaps the most notable thing coming out of Bill’s keynote is that it will be his last as chairman of Microsoft.  I’m sure he’ll be talking in the future, but not in the same capacity.  What effect might this have on the future of Microsoft?  How much impact has he had in the last few years as chairman, and in what areas of the company?

Continued Increases in Performance

The new trend for increased performance is no longer increases in chip speed (i.e. the number of instructions per second).  Now the increases will largely come from having many cores and processors doing work together in a single machine or on a single chip.  There are also systems working together across different machines, even out across the Internet.  Today’s programs aren’t written to take advantage of this parallel processing and cloud computing.  New techniques, tools, and frameworks will need to be built in order to take advantage of these new resources, or else developers will fall behind.  John Gage’s famous line, and Sun’s motto, “The Network is the Computer,” is turning into “The Network is the CPU.”  Programmers who don’t learn how to segment and separate their code for distributed processing will soon find themselves at the same disadvantage as those who couldn’t move from procedural programming to event-based and object-oriented techniques.
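To make the "segment and separate" idea concrete, here is a minimal sketch (my own, not from the keynote) of splitting a batch of CPU-bound work across cores using Python's standard-library `concurrent.futures`.  The `crunch` function and its inputs are hypothetical placeholders for real work.

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # Stand-in for a CPU-bound task: sum of squares below n.
    return sum(i * i for i in range(n))

def main():
    inputs = [10_000, 20_000, 30_000, 40_000]
    # Each input is handled by a separate worker process, so the
    # tasks can run on different cores at the same time.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(crunch, inputs))
    return results

if __name__ == "__main__":
    print(main())
```

The key design move is that `crunch` takes its input as an argument and shares no state with the other tasks, which is exactly the kind of separation that lets the same code later spread across machines instead of cores.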

Changes in Interaction

A big theme in this talk was the future of human-computer interaction.  Today our input devices are basically the keyboard and mouse, and to a certain extent the pen.  The future is all about “natural interfaces” like touch (and multi-touch), voice, and vision.  Star Trek, especially The Next Generation, seemed to me to strike a nice balance in its technology user interfaces: there was voice interaction, tactile/touch LCARS screens, and pen-based PADDs.  The Microsoft Surface touch technology, now being demoed for Microsoft Windows 7, could easily implement LCARS.  iPhones have done PADDs one better by including the communicator.  Really the only elusive technology is voice recognition, which is still very hit-or-miss with today’s technology.  Vision systems have a lot to offer; the Wii uses vision and 3D motion sensing in its controller to great success.  The main lesson here is that developers who do not spend some time looking beyond simple point-and-click and keyboard input may also find themselves falling behind.


The last thing that struck me was a seemingly larger commitment to robotics than I have seen from Microsoft before.  In a sense, robotics is extending the range and type of outputs in much the way the “natural interfaces” are extending the inputs.  They made the interesting comparison between the robotics development environments of today and the computer programming environments of 30 years ago.  Developing for robots means developing for a mobile system.  It means being able to process a wide variety of sensors and inputs and make decisions quickly.  It also means programming routines that are constantly running (e.g. keeping the robot balanced, monitoring the environment, etc.) and that run independently of one another.  One interesting component in the Microsoft Robotics Studio is its very sophisticated simulation environment.  This means people can create programs for very expensive or dangerous robots and run a variety of tests and actions without ever needing to touch the actual hardware.  In fact, Microsoft has started a RoboChamps competition to see how well people can do at programming robots in large-scale scenarios (e.g. DARPA urban challenge, Mars rover, etc.) without needing the hardware (or getting to Mars).  In a sense this is disappointing, because you don’t get to do the engineering and inventing of the robot (which LEGO Robotics folks will tell you is more than half the battle).  But in another sense it means that people with a simple download can get a flavor of what this type of programming is like without a huge investment.
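The idea of routines that run constantly and independently can be sketched in a few lines.  This is my own toy illustration (not Robotics Studio code, which uses its own runtime): a balance loop and an environment monitor each run in their own thread, neither waiting on the other.  The tick counters stand in for real sensor/motor work.

```python
import threading
import time

class Robot:
    def __init__(self):
        self.balance_ticks = 0
        self.monitor_ticks = 0
        self._running = True

    def balance_loop(self):
        # Runs continuously, independent of the monitor loop.
        while self._running:
            self.balance_ticks += 1   # stand-in for: read gyros, adjust motors
            time.sleep(0.01)

    def monitor_loop(self):
        while self._running:
            self.monitor_ticks += 1   # stand-in for: scan environment sensors
            time.sleep(0.01)

    def run_for(self, seconds):
        threads = [threading.Thread(target=self.balance_loop),
                   threading.Thread(target=self.monitor_loop)]
        for t in threads:
            t.start()
        time.sleep(seconds)
        self._running = False
        for t in threads:
            t.join()

robot = Robot()
robot.run_for(0.1)
```

The point is structural: neither loop calls the other, so a slow environment scan can never cause the robot to fall over.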

Creating programs that talk to a wide variety of peripherals and take input from many sensors is an important trend here.  Learning to program and test in a simulated or virtual environment is also a good skill to pick up.  All in all, I think this was a very good breakdown of some of the challenges facing developers today.

Next we’ll look at Steve Jobs’s perspective…