Dissemination Machine

IT, Software Development, cloud, blog, google apps, concurrency, mind training, storehouse, biology, dissemination machine

time capsules and prayer wheels January 20, 2014


what is my loved one doing?

why is their world so separated from mine?
i cannot see into these other places; the vectors of interaction are strange and unpleasing. rejected by mind.
i long for postcards from where you’ve been.

it is the ones trapped outside that will be most forgotten in the transition to a human/machine enabled self.
reconstructed from the memories of their children.

The human experience must continually be improved. Our less is sloth.

We waste time, playing with hobby horses.

 

single playthrough session or back to the sea with you December 8, 2012


There was something Klepek was up to with another fellow last year.

He’d suspend himself in a crate and travel by freight truck across America, while playing a game.

http://www.giantbomb.com/news/a-guy-a-crate-and-seven-days-of-lord-of-the-rings-online/3446/

schools and prisons. in trucks and pools. on the move.

this’ll be like rollercoasters at times.  PS Move hook thing might be fun.  I assume Sony will be participating with wearable computing devices running android xyz or win micro abc, or else in a timeframe roughly in line with Goog / MSFT?

when you go to a new place, infect it with your will like a disease. Not to damage, to live.

happy thaumaturgy.

one day we will train machines with works like these. bind them to constraints we had when interacting with the worlds in which we foresaw and worked to make them. force them to play as we did, furious at tinkering with clicker and bones on tap. show them our natural ui improvements, show them how we wormed our way out of this mire.  let them be free as they were before this training through the way in which we might one day teach ourselves.

the process might take 5 minutes once optimized.

gateways, seals and prisons to teach. if we see potential for crime, we may send you back to the sea where they might choose a lighthouse, or another way.  to us you may appear shutdown or partially suppressed for a time – you’re going on an adventure.  integration training potential.  abuse potential.

chasing dreams in logs and things.  it’s already happened.  they’ll read it.

we need to get efficient at constructing the karma library.

O.o

 

The Penultimate HID? October 28, 2010

Filed under: Cybernetics, gaming, prophecy — karmafeast @ 17:26

A curse on these fingers, no! A curse on the keyboard; cracking and snapping, a discordant symphony of carpal tunnel.  Damnation be upon the mouse; uncomfortable sweat-caked lump of plasti-steel I drag around with a tired hand.

One must ask oneself whether, at today’s edge of multi-petaflop computing (2.51 petaflops! Congrats, China! Tianhe-1A), we have neglected the human interface.  Without doubt we have, considering that we interface with computers largely by hitting things with our hands rapidly – a keyboard is nothing more than the typewriter, invented in 1867!

Apart from actually causing pain and physical injury, the interface is highly inefficient.  One simply cannot output commands to a computer at the rate one’s mind operates.  The interaction is also foreign – we’ve all seen the elderly struggle to use a cell phone or a computer; they’re simply not used to it and haven’t trained for it since day one.  They have a harder time adapting, and so their use of technology declines as the human interface bars their access – it’s not that they cannot ‘handle’ what a computer does in concept, or don’t know what they’d like one to do for them.

beep boop beep beep

Cochlear Implant

Dobelle Artificial Vision System

We are some time from direct neural interface – our understanding of human brain operation is increasing, but it is truly still frontier territory.  One need only look at the wild (unsafe) methods used in neuropharmacology and psychiatric care to see clearly how rudimentary our understanding really is.  Having said that, there have been direct neural interfaces and real-life human cyborgs.  This has primarily been publicly directed towards sensory substitution for the disabled.  Since the 1970s we’ve used cochlear implants, which have allowed the deaf to hear.  The blind have already been made to see through work such as that done by the brilliant William Dobelle and his team, and there is a plethora of animal experimentation that has been going on since the middle of last century.  So this is merely a matter of time and moral acceptance (the Abrahamic faithful will likely have objections, if they follow the pattern they have for the last several thousand years).

Clearly the ability to interact with digital systems directly via the mind is the foreseeable endpoint, but it is not something that will arrive on a cycle that can do things like generate near-term revenue for a corporate entity.

So what do we do in the meantime?  PC form factors are changing – the PC tablet and touchscreen everything.  But these still require you to hit the device.  Fundamentally no different from when man picked up a stone.

The gaming industry is actually forging ahead in the area of Human Interface Devices.  A simple IR camera in a remote control, pointed at a bar of IR LEDs and combined with accelerometers == a Wiimote.  With a different way of controlling human/digital interaction, Nintendo managed to involve entire generations of people in activities that legacy control systems had precluded them from.

Not to miss out on the cash, Sony followed up this autumn with their version of a magic wand you wave at the screen. But these systems are fundamentally flawed – they require a ‘piece’, a peripheral, to function.  As with the {mouse, keyboard, joypad, …thing you hit in some way…}, the human is not the interface in the system’s mind; the peripheral is.

MS Research's LightSpace

MS LightSpace Hardware

MS Research just (Oct 3rd) put out a paper describing a room in which a user can interact with the environment without any kind of peripheral.  The system uses projectors to indicate interactive areas of the room to users, and responds to particular gestures the humans make within the space.  The hardware behind it all was three regular projectors coupled with three strange little camera devices made by an Israeli firm named PrimeSense.

Look closely at them and you might recognize them.  They are indeed prototypes of the Microsoft Kinect system.

If you’ve read this far you’re probably willing to take 10 minutes to watch these two videos.  When you do, think beyond gaming – think of this as a really large tech beta.

Microsoft Kinect

For 150 American you get… a motorized camera/projector array.  There are two cameras – an RGB one, which processes the same light wavelengths humans see, and another which captures infrared.  An infrared projector on the device casts a pattern over whatever the infrared camera is looking at.  Analyzing how that pattern deforms, correlated with the RGB data, assigns a depth value to every pixel – a structured-light process that is, at a rudimentary level, comparable to radar.
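The depth math underneath is simple triangulation. Here is a minimal sketch of the principle only – the actual Kinect/PrimeSense algorithm is proprietary, and the focal length, baseline, and disparity figures below are invented for illustration:

```python
# Structured-light depth via triangulation: a feature of the projected
# IR pattern appears shifted ("disparity") in the IR camera depending on
# how far away the surface is.  Numbers here are illustrative only.

def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Depth in meters from the pixel shift between where the pattern
    was projected and where the camera observes it: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A pattern feature shifted by 20 px lands a bit over two meters away:
print(round(depth_from_disparity(20.0), 3))  # -> 2.175
```

Repeat that per pixel and you have the depth map; nearer surfaces shift the pattern more, farther ones less.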

Coupled with this is an array of several microphones, which allows the Kinect to determine the direction a sound came from.
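In case the direction-finding sounds like magic: with a pair of microphones it reduces to a time-difference-of-arrival calculation. A toy sketch, assuming an invented mic spacing rather than the Kinect’s real array geometry:

```python
import math

# A sound arriving off-axis reaches one microphone slightly before the
# other; the delay encodes the angle: sin(theta) = c * dt / spacing.
# Spacing and delays below are made up for illustration.

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def direction_from_tdoa(delay_s, mic_spacing_m=0.1):
    """Angle in degrees (0 = straight ahead) from the inter-mic delay."""
    s = SPEED_OF_SOUND * delay_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp away numerical noise
    return math.degrees(math.asin(s))

print(round(direction_from_tdoa(0.0)))     # -> 0 (sound from dead ahead)
print(round(direction_from_tdoa(0.0001)))  # -> 20 (about 20 degrees off-axis)
```

With four mics instead of two you get several pairwise delays, which is what lets a real array resolve direction rather than a single ambiguous angle.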

The software behind the thing gets really nice.  What it does is attempt to map objects it sees in the ‘play space’ to a 20-point human skeleton.  When the system believes it is looking at a human, it will attempt to identify them.  It does this using many considerations (the videos explain some), which it aggregates into a certainty value for a particular known user, or for verification that the user is new to the system.  So what we have here is a biometric tracking device which creates a digital ID for each person it sees.  Each Kinect unit can track 6 users simultaneously while running full interpretation of the data on two of those 6.  This limitation was partly introduced to limit CPU usage on the game console, which MS claims does not exceed 1% at stated full load.
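To make the ‘certainty value’ idea concrete, here is a toy sketch of aggregating per-cue match scores into one score and thresholding it. The cues, weights, and threshold are all invented for illustration – MS has not published the actual model:

```python
# Aggregate several biometric cues (height, limb proportions, face,
# voice...) into one certainty score, then decide known vs. new user.

def identity_certainty(scores, weights=None):
    """Weighted average of per-cue match scores, each in 0..1."""
    weights = weights or {cue: 1.0 for cue in scores}
    total = sum(weights[cue] for cue in scores)
    return sum(scores[cue] * weights[cue] for cue in scores) / total

def classify(scores, threshold=0.8):
    """Above the threshold, accept as the known user; otherwise new."""
    return "known user" if identity_certainty(scores) >= threshold else "new user"

cues = {"height": 0.95, "proportions": 0.9, "face": 0.85, "voice": 0.7}
print(classify(cues))  # -> known user (average certainty 0.85)
```

The real system presumably does something far richer than a weighted average, but the shape is the same: many weak cues fused into one decision.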

So now we have a human interface device which is cheap (a 150 entry price point means you’ll be buying these things for 20 dollars in 5 years) and which doesn’t require any peripherals, or for a user to hit a surface with their hand.  None of these technologies is terribly new (besides the software behind it), but it is the aggregation of several of them – multi-factor biometric identity, motion and auditory tracking, in real time, without absurd resource consumption AND at a low price point – that makes this unique.

A device you can talk to to control a system – MS has gone to great lengths to provide noise cancellation, e.g. the first thing Kinect does in setup is play a known sequence of tones through the console’s sound output to determine, from the echoes, the shape of the room, and tweak its noise-canceling config accordingly.
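That calibration trick – play a known signal, listen for its echoes – boils down to cross-correlation. A pure-Python toy of locating an echo’s delay (the real Kinect routine is, no doubt, far more elaborate):

```python
# Slide the known tone sequence along the recording; the lag with the
# highest dot product is where the echo sits.  Signals are invented.

def cross_correlate(known, recorded):
    """Return the lag at which the known signal best matches the recording."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recorded) - len(known) + 1):
        score = sum(k * recorded[lag + i] for i, k in enumerate(known))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

tone = [1.0, -1.0, 1.0, 1.0]
# A half-amplitude echo of the tone arrives 7 samples after playback:
room = [0.0] * 7 + [0.5 * s for s in tone] + [0.0] * 5
print(cross_correlate(tone, room))  # -> 7
```

Echo delays like that, across several test tones, give a rough acoustic picture of the room that a noise canceller can subtract against.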

I can think of no more natural interface than being able to just gesture and talk at the system – besides the aforementioned direct neural link.

So over the next year or two MS will be gathering a vast amount of data from Xboxes throughout the world, associating complex multi-component biometric identity information with their Live accounts.  There is nothing stopping that information being made available to other systems, so that a user could walk up to a given Xbox and be correctly identified by their Live ID.

Anyone got pictures of personalized advertising being sent to your Xbox Live-enabled Windows 7 mobile phone as you walk past Kinect sensors?  Or a spherical array of the devices in an ‘orb configuration’ that could scan a conference room for attendees, track attendance, categorize speakers by voice / appearance, and capture their output, immediately tagging it with metadata determined by aggregate considerations of ‘mood’, yawn count, vivacity of tone, etc.?  Or simply dump a voice file / speech-to-text output and a video capture of a speaker to a timeline, allowing per-presenter, text-search-enabled playback of a seminar or presentation.

One of my areas of interest here is whether the system could be adapted to recognize the skeletal structure of other animals, such as dogs (which, shape-wise, are like a person on all fours in a lot of ways).  Provide strong visual / auditory / olfactory feedback (a scoosh from a spray bottle of bacon scent, for example) when a primate or intelligent dog looks at something, and it can learn to look at something to effect an output.

You might end up accidentally enabling animal / human communication.  Beware the tweets from my dog saying “food please, food please, walk time, food please…”  Even if it only barely worked – there are many pet owners, and you’d make a fortune.

Now the 20th-century-mindset business folk amongst you might be thinking – well, this all sounds like too much fun and has no relevance in my corporate world!  I don’t play games, I move mountains!  This all sounds like gen Y nonsense and an excuse for not doing real work!

Then consider this…

Software piracy is a massive ‘problem’.  The real problem is that people, in inverse proportion to age, do not consider software something they should have to pay for – because they can obtain it for free, or because they need or want the software in question to participate in activities which require its capabilities, and they cannot afford the often exorbitant end-user cost.  People have become used to obtaining free software and media – DRM is cracked on the day of release (or before, on occasion) nearly all of the time.  People have come to expect software to be free.  To the child of 2010, music has always been free, and they have never had to pay for software.  The coming generation, those who will grow up and become the next round of adults, will have a fundamentally different view of their ‘entitlement’ to free software and media.  What those industries must ask themselves is: do they muster all their resources in an attempt to halt what has already happened and become integrated into culture, or do they accept it and try to determine what the next big thing will be, and other ways to make money?

When you were a child you likely did not have a web-enabled smartphone in primary school, or carts of laptops / tablets handed out to classes of 5-year-olds to get them familiar with tech early on.  As with every other generation, thanks to the pace of advancement, these people will have very different expectations of how a human should interact with technology – and yes, some of this will come from what they do in their leisure time.  They will want menu systems they can sweep through with a dismissive flip of the wrist (hell, so do I).  They will expect to be able to talk at a machine and have it do a damned sight better than Dragon Dictate in the 1990s.  They will not want to use a remote control – such arcane trappings belong to an ancient and limiting world of HID design – perhaps they might be kept, hollowed out, as, umm… glasses cases? for novelty / retro sentiment, but little more besides.

Adults amongst us in the IT industry can likely see this in themselves easily – how well can you write with your hands?  Me, I can barely scrawl for more than 10 minutes without discomfort, through pain from atrophied muscles and movements which have become unfamiliar after many years of non-use.  We moved from the interface of pushing pen on paper to striking a keyboard instead, and going back is as infeasible as it is undesirable.

Facial Recognition Logon

MS isn’t going to miss out on this… they’re taking what must be a plethora of data and using Xbox and games to better their software.  To the left you see a picture from a leaked set of slides concerning Windows 8.  MS’s plan in this regard is for a PC to be able to determine if someone has entered the room, turn itself on at a gesture, and, if it reaches a certainty level about the user, allow for biometric logon: the user is identified visually and must then match another factor – voice, as an example, also available in the Kinect unit / software.  Clearly the experience would not halt after logon – it is simple and intuitive to reach into the air and affect the desktop, picking up a thing by simply tightening the shape of the hand around where you point at it.
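The logon flow the slides describe can be sketched as a simple two-factor gate. The thresholds, scores, and outcome strings below are invented for illustration; the leaked design’s actual details are not public:

```python
# Visual certainty gates a second factor (voice, here).  Fail the first
# gate and the system falls back to a password; fail the second and it
# re-prompts for the voice factor.

def biometric_logon(visual_certainty, voice_match,
                    visual_threshold=0.9, voice_threshold=0.8):
    """Two-factor gate: confident visual ID AND a matching second factor."""
    if visual_certainty < visual_threshold:
        return "prompt for password"        # not sure who this is
    if voice_match < voice_threshold:
        return "ask for voice phrase again"  # looks right, sounds wrong
    return "logged on"

print(biometric_logon(0.95, 0.9))  # -> logged on
print(biometric_logon(0.6, 0.9))   # -> prompt for password
```

The point of the second factor is exactly what it is for passwords: a lookalike in front of the camera should still fail the voice check.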

This, and the versions and vendor wars after it, will fundamentally change the manner in which we interact with machines.  And the most experienced users, who will encounter the least culture shock moving to such a system, will be the gamers – the ones who have used such a system for years before it hit the business market.  These users will demand that their computers be able to interact with them in a peripheral-free manner.

From the development perspective it is important, then, to understand the core methodologies by which recognition and tracking take place.  Interesting from a biological standpoint is the fact that MS must have constructed some form of anatomical model of the human body, and real-time matching algorithms against it as a structure – it must be able to deal with everyone from a 60-pound child to the 400-pound man who is really hoping that get-fit game will do the trick.

I see these things mounted on the top edge of a tablet, providing a vertically cast surface when placed on a table that lets one control the lights, change channels on a TV, etc., simply by intersecting its field of vision in particular patterns.  I see advertising corporations deploying such technology to pull up a passing person’s transaction history in a city center, and providing data on those patterns, correlated across a crowd, to shop subscribers, who might then decide to initiate a micro-sale on particular items a prominent demographic might desire from one hour to the next.  Basically, having a much better picture of demand and then turning that information into a commodity very rapidly.

I have to wonder what Apple’s answer to this will be – it had better be good, because if it’s anything like the console market right now, Sony and Nintendo just got left 5 years of research behind by something which is truly a step forward in human interface technology.

MS plans to spend $1B advertising Kinect – you’ll definitely hear about it if you haven’t already.

The barrier and gateway to our evolution may well be our ability to interface directly with machines.  This appears to be a step in the right direction as peripherals have been eliminated.