Dissemination Machine

IT, Software Development, cloud, blog, google apps, concurrency, mind training, storehouse, biology, dissemination machine

time capsules and prayer wheels January 20, 2014


what is my loved one doing?

why is their world so separated from mine?
i cannot see into these other places, the vectors of interaction are strange, and unpleasing. rejected by mind.
i long for postcards from where you’ve been.

it is the ones trapped outside that will be most forgotten in the transition to a human/machine enabled self.
reconstructed from the memories of their children.

The human experience must continually be improved. Our less is sloth.

We waste time, playing with hobby horses.

 

2014 huh? January 6, 2014


So, 2014 huh? Oculus Rift and alternatives, and maturation continuing in NUI with Kinect 2 or new AAPL / PrimeSense offerings?

AAPL, MSFT, and GOOG need to start talking about replacement rear-view mirrors in cars, tricked out with peer-to-peer traffic management for crashes, integrated maps, and tethering to the phone for that device's capabilities rather than replicating them in the car's own system! Screen display on the rear-view mirror and/or the ability to project from the back of the mirror onto the inside of the windscreen, so as to facilitate HUDs in cars which do not have them.

[Image]

(like this but better and not for watching movies while driving?!)

It's not like we don't just look at our phones now anyhow while driving. Of course we shouldn't – but we do. It seems safer to have it in a field of vision where the periphery is the forward road, rather than the floor / car interior as the user stares down at a cell phone.

==

MSFT + sign language + live translation – the ability to produce consumer applications akin to the gift of tongues for online media. A public address system with auto-translation into any given (reasonable) language, even at rough accuracy, would be enormously powerful. It enables types of societal interaction previously not achievable in real time. Important in societies with oral preservation of histories; a diplomacy helper / cost reducer as opposed to live human translation, etc.

biblically neat…

Why are TWTR and Weibo not cross-feeding and auto-translating? This seems like a prime space for inter-cultural collaboration between the US and CN; it'd be good for the people, and 140-character blasts seem like a good thing to learn to translate quickly from the high-rate stream pipe of either system. Perhaps no profit in it? Government resistance from either end? Likely. Nonetheless, it's too obvious not to be explored.
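A minimal sketch of what such a bridge could look like (Python; the stream, translator, and publisher below are hypothetical stand-ins I've made up, not real TWTR / Weibo / translation APIs – real clients would slot into the same shape):

    from typing import Callable, Iterable

    def cross_feed(stream: Iterable[str],
                   translate: Callable[[str, str], str],
                   publish: Callable[[str], None],
                   target_lang: str = "zh",
                   max_len: int = 140) -> None:
        # Read short posts from one network, machine-translate, republish on the other.
        for post in stream:
            translated = translate(post, target_lang)
            publish(translated[:max_len])  # 140-character services force truncation

    # Demo wiring with stub functions standing in for real API clients:
    cross_feed(["Hello from the other side of the pipe."],
               translate=lambda text, lang: f"[{lang}] {text}",
               publish=print)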


==

MSFT – Kinect 2 is fantastic at voice recognition.  It can be, and is, used in conjunction with VR headsets like the Oculus Rift (I am so happy these are making a comeback in a manner worth half a damn; in 10 years they'll look like snow goggles. Contacts break, and surgery will not be palatable as 'cosmetics' for the 18-25 crowd, I'd say 😦 – though worth the risk of surgery for the medically blind within 10 years, and on the way if we work at it, I'd say).

2014 could see some really nice setup for this.  Dunno about the treadmill thing you strap into and run around on, in anything other than a niche / dedicated play space.  These things need to be like Kinect: no peripheral.

We can forgive Oculus for putting a screen in front of our faces, because we cannot yet do direct / reflected projection onto the eye at high resolution in the consumer space.

So we'll put on space hats and look at webcam views of the world we've blinded ourselves to, in order to augment our sight of it!

Whoever creates a collaborative Minecraft-style game that overlays AR projection onto real-world locations / allows group graffiti etc. is going to make lots of money once these things are around – if they take off, and are good, and ideally can be made mobile.  Damned batteries…

The security space is huge in areas like airports for HUDs on screeners.  Technological capabilities of the civilian market will cause similar systems to be utilized by threat agents.  It seems wise for the governments concerned with securing such zones to achieve technological superiority.

Home 3D printing needs to be more than a plotter that extrudes plastic of the types we used to melt in shop class to make pencil holders for dad’s day.  Mind you, we can make functional guns.

==

Drones are always thought of as flying… why are we not focusing, hell-bent, on sewer-cleaning ones, and automated road freight convoys (like they have had in JP for years)?

I’ve seen some community people involved in the civilian space – the potential is enormous and many untapped areas such as physical defense and intercept in law enforcement spring to mind when one considers coordinated swarm action and capability to carry varying modular payloads.

==

…In other Xbox One news – I need Office on my Xbox One.  If not to modify documents, then to present via an Xbox One in PowerPoint, using it as a projector.  SmartGlass stuff, secondary display for various apps, blah blah.

'Do your homework' parental control mode.  Puts the Xbox One in a mode where you plug in a keyboard / mouse and do work.  Plausible in an apps VM?

==

Biometric health monitors we wear on our wrists, as the tech powers that be try to convince us we need to carry timepieces other than our smartphones.

What the Fitbits / FuelBands / UPs / LG's OLED one etc. need is not an extra piece, or earphones (LG) that monitor heart rate, but that function occurring in the wristband itself.

The form factor is important: the bracelets are easy to see, distribute and size, and have a bunch of core software algorithms well written and understood (I assume) for handling the data as sourced at the end of the arm.

Heart rate provides a ton of information on the biological operating state / disposition of the user.  It has direct implications in medical triage situations (imagine first responders tagging patients with bands, which then geolocate relative to a workstation management point via Bluetooth, Wi-Fi or whatever, for prioritizing critical care at the scene).
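To make the triage idea concrete, here is a toy sketch of my own (not any real first-responder system): each band reports a patient ID, a heart rate, and a rough distance from the management workstation estimated from radio signal strength, and the workstation sorts the scene so the most abnormal heart rates come first.

    from dataclasses import dataclass

    @dataclass
    class BandReading:
        patient_id: str
        heart_rate_bpm: int
        distance_m: float  # rough estimate from Bluetooth / Wi-Fi signal strength

    def urgency(r: BandReading) -> float:
        # Crude score: how far the heart rate sits from a nominal 75 bpm resting rate.
        return abs(r.heart_rate_bpm - 75)

    def triage_order(readings: list[BandReading]) -> list[BandReading]:
        # Most abnormal first; nearest patient breaks ties.
        return sorted(readings, key=lambda r: (-urgency(r), r.distance_m))

    scene = [BandReading("p1", 78, 12.0), BandReading("p2", 141, 30.0), BandReading("p3", 39, 5.0)]
    print([r.patient_id for r in triage_order(scene)])  # ['p2', 'p3', 'p1']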

Is energy density a problem for heart rate monitors in the batteries? They have them in these new smartwatch things they're trying to peddle, but those are large, over-reaching (except maybe the Adidas one) and seem unnecessary…  My bias against a second compute unit with a display screen on my wrist and no apparent real function aside, these things to date all have terrible battery life.


Another issue might be the skin contact area required for the heart rate sensor… but this seems unlikely considering the size and power requirements of medically used heart rate monitor pads – though the cheap disposables tend to rely on a conducting gel adhesive (unknown whether the conducting layer is required or is just for stickiness to the patient). The watch ones seem to use metal pad sensors.

It'd be interesting to combine the data from an actor-attached health / activity monitor with the data from Kinect 2. Could an activity band be made that operates in a different mode, streaming real-time info on the user for the purposes of improving human / computer interaction or the user experience?

I should bloody well imagine so!

==

Parents are NOT going to approve of Jonny strapping on an Oculus Rift for 8 hours a day… we need better parental controls targeted towards outcomes and objectives for the learning and development of the player / child. Homework verification, feedback programs piping gamertag info into education systems to identify and stream learning or identify career potentials (beats sitting those daft tests once or twice in school and being told to go be an xyz from your 50 questions).

We talk about the quantified man and blah. I'm certain there are gaggles of education professionals who'd know exactly what to do with metadata which might empower them to inspire learners in an individual sense – to turn experiential leisure into transferable workplace skill.

We must rise to this particular challenge. Our minds move at speeds now that demand it.


==

#snowden – sigh.  The key that people like Mr. Greenwald keep overlooking is the 'why we do this' aspect of government signals intelligence operations.  Also lacking is any consideration of real-time communications intercept for the purposes of securing the safety of US citizens, and of the forecasting ability derived from analysis of past events and their communications lead-up, versus patterns that seem similar or odd to the algorithms combing the reams of haystacks we worry about being seen in.

The NSA does not care about your dodgy porn, as long as it's non-criminal.  They have had idiot employees who have spied on their loved ones, but I'd bet so have Visa, or the police.

They've been infiltrated by a knowledge worker in such a devastating way that some say there isn't even an assessment of the totality of the exfiltrated artifacts.  We, as the USA government, have contracted out too much – and procure in ways that lend themselves to this kind of security situation arising.

Here's what's also happened – countries who'd see stuff and previously think 'oh, neato, I bet that's NSA doing xyz', or actively help, are now much less likely to do so, because of a perception stink bomb let off by a guy who worked at a place for 3-4 months before jumping ship to Hong Kong for a techno-traitor-vacation with a bunch of documents other folk wrote and he was employed and trusted to care for.

So how's this going to impact who wants what from software and tech?

Well, built-in hardware encryption on SSDs, done right on their controllers and maintaining the 500 MB/s read/write rates you'd expect, is available now in the consumer price range with ~1.2% overhead on the encryption (versus ~13% on a non-HW-accelerated SSD).

http://www.anandtech.com/show/6891/hardware-accelerated-bitlocker-encryption-microsoft-windows-8-edrive-investigated-with-crucial-m500

Western Digital's My Passports etc. have them now.  Seems a good way to go.  They work with BitLocker via hardware acceleration.  Huzzah!

==

GOOG needs to improve user lifecycle management automation for enterprises within the Google Apps admin console or associated tools.  Much automation is achievable now via APIs and custom code, but it's cumbersome to say the least and hardly inviting to an organization considering a switch.  Google Apps Scripts are neat, but the 5-minute limit on scripts executed in the cloud is too low when a script's execution performance cannot be guaranteed (imo).
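As an illustration of how much of that lifecycle work currently has to be pushed out to custom code: a minimal off-platform sketch using the Admin SDK Directory API from Python (which also sidesteps the Apps Script time limit). The service-account file name, admin address, and OU path are placeholders, and this assumes domain-wide delegation has been set up.

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES, subject="admin@example.com")
    directory = build("admin", "directory_v1", credentials=creds)

    def offboard(email: str, archive_ou: str = "/Departed") -> None:
        # Suspend the account and park it in a holding OU in one Directory API call.
        directory.users().update(
            userKey=email,
            body={"suspended": True, "orgUnitPath": archive_ou},
        ).execute()

    offboard("departed.user@example.com")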

Document ownership transfer needs to improve via native tools, as, critically, does the ability to categorize data in Drive and secure it centrally.

==

Both O365 and GApps need to very strongly consider the security of their data and proprietary information.  It must be complex, to say the least, when interaction occurs with the governments of many countries of the world, with differing views towards corporate espionage and rights of access to information which may or may not concern their citizens on the online services provided by these corporations.

Users have become hyper aware of security in a way quite unlike that which occurred in the aftermath of the 9/11 incidents.  They distrust the government and now have a pariah feeding them information via sensationalized media presentation.

This does not bode well for perception of government co-operation and collaboration for social projects, security purposes etc.

There must be protection for user privacy.

There are responsibilities of a human using any communications vector as part of a functional society.

Systems of communications are increasingly electronic.

On occasion the governments of the world may have legitimate reason to request that privately held information about a human actor be provided so as to assist with their assessment of the individual – typically as it relates to that individual's likely intent towards causation of harm.

Protocol for such interactions is of paramount importance, as we have clearly seen.

The process is NOT fair.  The process DOES impinge on what would traditionally be thought of as ephemeral thought data, uncapturable without the massive effort of basically having humans spy on a person 24/7, live.

Yes, we can probably guess at where crimes might occur on hot days by particular block of a given city if we REALLY looked at the data.  That’s scary perhaps, but also potentially very useful.

We might also be able to educate out the root causes of negative patterns in society, or better yet accurately target needed resources to our people, for the purposes of improving the lives of those that live in these places being watched over by big brothers.

Snowden caused hysteria.  Signals intelligence is a real thing.  Sorry.  It's not palatable.

 

Pride June 2, 2013

Filed under: Biology,Cybernetics,evolution,expedient means — karmafeast @ 16:23

Wanna know a secret? Being gay is a choice, as much as cultivation of any thought pattern is. I say this as a gay man with some background in biology and more than a passing interest in mind training.

What is not a choice, however, are the biological reactions that occur when a human body sees what it considers a potential mate. You come hard-wired to blush, eyes changing, goosebumps etc., and you don't get to choose when that occurs via external stimulus. A sweeping generalization, true – but one that's held, at least in personal experience and in the behavior I've seen exhibited by others.

A person can train their mind to suppress the upwelling of thought to the point of practical elimination, yet the process of arising seems to come without will. Try being closeted; you'll see. You become programmed – it's not even suffering after a while, you end up seeking solace in hiding in lies.

This twisted, cruelly built prison, which would deny a person happiness of some kind due to disapproval or hate, is an awful transgression to commit upon humanity, and one supported in law. We've mixed up bestiality with liking bananas, I fear…

We must purge ourselves of this ignorance daily – we resemble too closely animals which would beat to death those with ‘incorrect’ plumage.

How could we ever hope to deal with artificial life we create, or whatever comes along, if we cannot control even the simplest of biological drives: to like the self-like and dislike the non-like? Differentiation and spite do not have to be the axes by which we separate wheat from chaff, worthy from not.

How will we deal with the very real coming fact that we are approaching the end of our dependence on the sexes? When same-sex couples can have a child we no longer have any quasi-biological 'excuse' to be bigoted. I wonder how long we'll stall before looking properly at this, at the barriers in DNA methylation etc. How long we'll try to ban it. Probably until people with power die; seems to be how it is.

So much waiting…

We’ve blessed ourselves with so many layers of intellectualism that we abstract and forget our very primitive, and often disappointingly feral ancestry.

Jealous, petty beasts – we’d strike when we know there’s no competition other than our fear and distaste.

As to being gay – asking / demanding that someone modify their mind so that they suppress the symptomatic expression of thoughts that occur is asking a lot. This is beyond most people, and to be quite frank, it is unreasonable to ask of someone in order to improve 'your' life or perceptions within it. You can't shake a mindset out of someone, beat it out of them or talk them out of what you believe to be 'evil' without causing harm to their mental well-being.

We bear no right to do this to each other, yet we do – because there’s more of one group than another and we know we can get away with it.

Why would we do this to ourselves when we realize we have very limited time on this earth? So much is lost with each death and it will continue to be if we don’t pull our heads out of our asses and work to get ourselves out of these decaying bodies, and learn to repair them properly.

Regardless of motivational 'purity', one must assess one's right to modify the mind of another human – either through social policy which restricts (or makes more expensive) a type of life due to disapproval, or through some nonsense argument about a procreation shortage.

Hurry up and continue to hook us up online (properly), this shit will make us laugh when we become more than a single, limited human mind; and maybe feel a little sorrow for those we lost on this trip.

 

The Penultimate HID? October 28, 2010

Filed under: Cybernetics,gaming,prophecy — karmafeast @ 17:26

A curse on these fingers, no! A curse on the keyboard; cracking and snapping, discordant symphony of carpal tunnel.  Damnation be upon the mouse; uncomfortable sweat caked lump of plasti-steel I drag around with a tired hand.

One must ask oneself whether, at today's edge of multi-petaflop computing (2.51 petaflops! Congrats, China! Tianhe-1A), we have neglected the human interface.  Without doubt we have, considering that we interface with computers largely by hitting things with our hands rapidly – a keyboard is nothing more than the typewriter, invented in 1867!

Apart from actually causing pain and physical injury the interface is highly inefficient.  One simply cannot output commands to a computer at the rate one’s mind operates.  The interaction is also foreign – we’ve all seen the elderly struggle to use a cell phone or a computer, they’re simply not used to it and didn’t train for it since day one.  They have a harder time adapting, and thus their use rate of technology declines as the Human Interface bars their access – it’s not that they cannot ‘handle’ what a computer does in concept or don’t know what they’d like one to do for them.

beep boop beep beep

Cochlear Implant

Dobelle Artificial Vision System

We are some time from direct neural interfaces – our understanding of human brain operation is increasing but is truly still frontier territory.  One need only look at the wild (unsafe) methods used in neuropharmacology and psychiatric care to see clearly how rudimentary our understanding really is.  Having said that, there have been direct neural interfaces and real-life human cyborgs.  This has primarily been publicly directed towards sensory substitution for the disabled.  Since the 1970s we've used cochlear implants, which have allowed the deaf to hear.  The blind have already been made to see with work such as that done by the brilliant William Dobelle and team, and there is a plethora of animal experimentation that has been going on since the middle of last century.  So this is merely a matter of time and moral acceptance (the Abrahamic faithful will likely have objections if they follow the pattern they have for the last several thousand years).

Clearly the ability to directly interact with digital systems via the mind is the foreseeable endpoint, but this is not something that will be seen in a cycle which can do things like generate near-term revenue for a corporate entity.

So what do we do in the meantime?  PC form factors are changing – the PC tablet and touchscreen everything.  But these still require you to hit the device.  Fundamentally no different from when man picked up a stone.

The gaming industry is actually forging ahead in the area of Human Interface Devices.  A simple IR camera on a remote control combined with some accelerometers == a Wiimote.  Nintendo managed to, with a different way of controlling human / digital interaction, involve entire generations of people in activities that legacy control systems precluded access to.

Not to miss out on the cash, Sony followed up this autumn with their version of a magic wand you wave at the screen. But these systems are fundamentally flawed – they require a 'piece', a peripheral, to function.  Like the {mouse, keyboard, joypad, …thing you hit in some way…}: in the system's mind the human is not the interface, the peripheral is.

MS Research's LightSpace

MS Lightspace Hardware

MS Research just (Oct 3rd) put out a paper where they've created a room in which a user can interact with the environment without any kind of peripheral.  The system uses projectors to indicate to users the interactive areas of the room, and also responds to particular gestures the humans make within the space.  The hardware used to do all this was a series of 3 regular projectors coupled with three strange little camera devices made by an Israeli firm named PrimeSense.

Looking closely at them you might recognize them.  It is indeed a prototype of the Microsoft Kinect system.

If you've read this far you're probably willing to take 10 minutes to watch these two videos.  When you do, think beyond gaming – think of this as a really large tech beta.

Microsoft Kinect

For 150 American dollars you get… a motorized camera / projector array.  There are two cameras – an RGB one which processes the same light wavelengths humans see, and another which captures infrared.  An infrared projector on the device casts a pattern over whatever the infrared camera is looking at.  Analysis of the feedback from this pattern, correlated with the RGB data, gives a depth value for every pixel – a process unlike radar, but at a rudimentary level similar to it.
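The actual PrimeSense pipeline is proprietary, but the geometry can be sketched as plain structured-light triangulation: the IR projector and IR camera form a stereo pair with a known baseline, and the pixel shift of a projected dot maps to depth. The focal length and baseline below are rough, assumed figures, not the device's real calibration.

    import numpy as np

    def disparity_to_depth(disparity_px, focal_length_px=580.0, baseline_m=0.075):
        # Simplified model: Z = f * B / d. Zero disparity maps to "infinitely far".
        d = np.asarray(disparity_px, dtype=float)
        with np.errstate(divide="ignore"):
            return np.where(d > 0, focal_length_px * baseline_m / d, np.inf)

    print(disparity_to_depth(30))  # a 30-pixel shift lands at roughly 1.45 m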

Couple with this an array of several microphones which allows the Kinect to determine the direction of sound.

The software behind the thing gets really nice.  What it does is attempt to map objects it sees in the 'play space' to a 20-point human skeleton.  When the system believes it is looking at a human it will attempt to identify them.  It does this using many considerations (the videos explain some) which it aggregates into a certainty value for a particular known user, or for verification that the user is new to the system.  So what we have here is a biometric tracking device which creates a digital ID for each person it sees.  Each Kinect unit can track 6 users simultaneously while running full interpretation of the data on two of those 6.  This limitation was partly introduced to limit CPU usage on the game console, which MS claims does not exceed 1% at stated full load.
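A toy illustration (my own, not Microsoft's algorithm) of what "aggregating considerations into a certainty value" can look like: each cue scores a match from 0 to 1, a weighted average becomes the certainty, and a threshold decides between greeting a known user and enrolling a new one. The cue names, weights, and threshold are all made up.

    def identity_certainty(scores: dict[str, float], weights: dict[str, float]) -> float:
        # Weighted average of per-cue match scores, each in the range 0..1.
        total = sum(weights.values())
        return sum(weights[cue] * scores.get(cue, 0.0) for cue in weights) / total

    cues = {"face": 0.92, "height_and_build": 0.80, "voice": 0.70}
    weights = {"face": 0.5, "height_and_build": 0.2, "voice": 0.3}

    certainty = identity_certainty(cues, weights)
    print("known user" if certainty >= 0.75 else "treat as new user", round(certainty, 2))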

So now we have a human interface device which is cheap ($150 at the entry price point means you'll buy these things for 20 dollars in 5 years) and which doesn't require any peripherals, or for a user to hit a surface with their hand, etc.  None of these technologies are terribly new (besides the software behind them), but it is the aggregation of several technologies to track multiple factors of biometric identity, motion and sound in real time, without absurd resource consumption AND at a low price point, that makes this unique.

A device you can talk to to control a system – MS has gone to great lengths to provide systems for noise cancellation, e.g. the first thing Kinect will do in setup is play a known sequence of tones through the console's sound output to determine, from the echoes, the shape of the room and tweak its noise-cancelling config.
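That calibration step – play a known signal, listen to what comes back, infer the room – boils down to finding echo delays. A crude stand-in of my own (the real Kinect processing is not public) is to cross-correlate the known test tone against the microphone recording and read off the strongest lags:

    import numpy as np

    def echo_delays_ms(reference: np.ndarray, recorded: np.ndarray,
                       sample_rate: int, top_n: int = 3) -> list[float]:
        # Cross-correlate the played tone with the recording; strong peaks at
        # positive lags correspond to the direct path and the loudest echoes.
        corr = np.correlate(recorded, reference, mode="full")[len(reference) - 1:]
        lags = np.argsort(corr)[-top_n:]
        return sorted(lag / sample_rate * 1000.0 for lag in lags)

    # Tiny demo: a reference click echoed 100 and 250 samples later at 16 kHz.
    ref = np.zeros(512); ref[0] = 1.0
    rec = np.zeros(2048); rec[100] = 1.0; rec[250] = 0.5
    print(echo_delays_ms(ref, rec, 16000, top_n=2))  # [6.25, 15.625] ms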

I can think of no more natural interface than being able to just gesture and talk at the system – besides the aforementioned direct neural link.

So over the next year or two MS will be gathering a vast amount of data from Xboxes throughout the world, associating complex multi-component biometric identity information with their Live accounts.  There is nothing stopping that information being made available to other systems, so that a user could walk up to a given Xbox system and be correctly identified by their Live ID.

Anyone got pictures of personalized advertising being sent to your Xbox Live enabled Windows Phone 7 as you walk past Kinect sensors?  Or a spherical array of the devices in an 'orb configuration' that could scan a conference room for attendees, track attendance, categorize speakers by voice / appearance and capture their output, immediately tagging it with metadata determined by aggregate considerations of 'mood', yawn count, vivacity of tone etc.  Or simply dump a voice file / the speech-to-text output and a video capture of a speaker to a timeline which allows for per-presenter, text-search-enabled playback of a seminar or presentation.

One of my areas of interest for this would be whether the system can be adapted to recognize the skeletal structure of other animals, such as dogs (which, shape-wise, are like a person on all fours in a lot of ways) – providing strong visual / auditory / olfactory feedback (a scoosh from a spray bottle of bacon scent, for example) when a primate or intelligent dog looks at something, so it learns to look at things to effect an output.

You might end up accidentally enabling animal / human communication.  Beware the tweets from my dog saying "food please, food please, walk time, food please…"  Even if it only barely worked – there are many pet owners and you'd make a fortune.

Now the 20th century mindset business folk amongst you might be thinking – well this all sounds like too much fun and this has no relevancy in my corporate world!  I don’t play games, I move mountains!  This all sounds like gen y nonsense and an excuse for not doing real work!

Then consider this…

Software piracy is a massive 'problem'.  The real problem is that people, in a relationship inversely proportional to age, do not consider software as something they should have to pay for – because they can obtain it for free, or because they need/want to use the software in question to participate in activities which require its capabilities and they cannot afford the often exorbitant cost at the end-user level.  People have become used to obtaining free software and media – DRM is cracked on the day of release (or before, on occasion) nearly all of the time.  People have come to expect software to be free.  To the child of 2010 music has always been free, and they have never had to pay for software.  The coming generation, those who will grow up and become the next round of adults, will have a fundamentally different view of their 'entitlement' to free software / media.  What those industries must ask themselves is: do they choose to muster all their resources in an attempt to halt what has already happened and become integrated into culture, or do they accept it and attempt to determine what the next big thing will be / other ways to make money?

When you were a child you likely did not have a web-enabled smartphone in primary school, or carts of laptops / tablets being given out to classes of 5-year-olds to get them familiar with tech early on.  As with every other generation, due to the pace of advancement, these people will have very different expectations of how a human should interact with technology – and yes, some of this will come from what they do in their leisure time – they will want menu systems that they can sweep through with a dismissive flip of the wrist (hell, so do I).  They will expect to be able to talk at a machine and have it do a damned sight better than Dragon Dictate in the 1990s.  They will not want to use a remote control – such arcane trappings are of an ancient and limiting HID design and world – perhaps they might be kept hollowed out as, umm, glasses cases? for novelty / retro sentiment, but little more besides.

Adults amongst us in the IT industry can likely easily see this in themselves – how well can you write with your hands?  Me, I can barely scrawl for more than 10 minutes without discomfort through pain from atrophied muscle and movements which have become unknown after many years of non-use.  We moved from the interface of pushing pen on paper to striking a keyboard instead and going back is as infeasible as it is undesirable.

Facial Recognition Logon

MS isn't going to be missing out on this… they're taking what must be a plethora of data and using Xbox and games to better their software.  To the left you see a picture from a leaked set of slides concerning Windows 8.  MS's plan in this regard is to have a PC determine if someone enters a room, turn itself on at a gesture, and, if it reaches a certainty level about the user, allow for biometric logon where a user is identified visually and then must match another factor – voice as an example, also available in the Kinect unit / software.  Clearly the experience would not halt after logon – it is simple and intuitive to be able to reach into the air and affect the desktop, picking things up by simply tightening the shape of the hand around where you point.
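As a sketch of the flow being described – my own toy state machine, not the leaked Windows 8 design – presence wakes the machine, a gesture requests logon, the visual identification has to clear a certainty threshold, and a second factor (voice here) is still required before the session unlocks:

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        AWAKE = auto()
        IDENTIFIED = auto()
        LOGGED_ON = auto()

    def step(state, event, face_certainty=0.0, voice_match=False, threshold=0.9):
        if state is State.IDLE and event == "person_entered":
            return State.AWAKE                      # wake the machine
        if state is State.AWAKE and event == "logon_gesture" and face_certainty >= threshold:
            return State.IDENTIFIED                 # first (visual) factor accepted
        if state is State.IDENTIFIED and event == "voice_sample" and voice_match:
            return State.LOGGED_ON                  # second factor accepted
        return state                                # anything else: stay put

    s = State.IDLE
    s = step(s, "person_entered")
    s = step(s, "logon_gesture", face_certainty=0.95)
    s = step(s, "voice_sample", voice_match=True)
    print(s)  # State.LOGGED_ON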

This, and the versions and vendor wars after it, are something that will fundamentally change the manner by which we interact with machines.  And the most experienced users, who will encounter the least culture shock moving to such a system, will be the gamers – the ones who have used such a system for years before it hit the business market.  These users will demand that their computers be able to interact with them in a peripheral-free manner.

From the development perspective it is important, then, to understand the core methodologies by which recognition and tracking take place.  Interesting from a biological standpoint is the fact that MS must have constructed some form of anatomical model of the human body and real-time matching algorithms against it as a structure – it must be able to deal with everything from 60-pound children to the 400-pound man who's really hoping that get-fit game will do the trick.

I see these things mounted on the top end of a tablet, providing a vertically casting surface when placed on a table that allows one to control the lights, change channels on a TV, etc. simply by intersecting the field of its vision in particular patterns.  I see advertising corporations deploying such technology to pull up a passing person's transaction history in a city center and providing that data, correlated across a crowd, to shop subscribers who might then decide to initiate a micro-sale on particular items a prominent demographic might desire from one hour to the next.  Basically having a much better picture of demand and then turning that information into a commodity very rapidly.

I have to wonder what Apple's answer to this will be – it had better be good, because if it's anything like the console market right now, Sony and Nintendo just got left 5 years of research behind by something which is truly a step forward in human interface technology.

MS plans to spend $1B advertising Kinect – you'll definitely hear about it if you haven't already.

The barrier and gateway to our evolution may well be our ability to interface directly with machines.  This appears to be a step in the right direction as peripherals have been eliminated.