Access to the latest info and tech is easy because of the internet now, but moving to NYC and going to an art/tech school in Manhattan (NYU-ITP) has pushed me even closer to the sources of ground-breaking stuff that eddies in github repos, IRC channels, and school projects before being cut loose to hackernews or reddit or the other nerd aggregators.
Here’s some shit that I’ve seen while at ITP that I thought was fucking awesome, not necessarily because it’s never been done before, but primarily because it’s so easy for any regular person to play with now.
Interactive Coding
In his Nature of Code class, Dan Shiffman passed along a link to Bret Victor’s talk, “Inventing on Principle”. In the lengthy vid (all of it worth watching), Victor shows real-time feedback for coding. That is, if you change the logic in your code, you see how the variables change, in real time:
[embedded video]
He extended this to a circuit diagram with a timeline of synced waveforms, showing how electrons flow through the circuit as you change resistance and swap parts:
[embedded video]
Then he showed swiping on an iPad to change elements on an animation timeline, creating a sort of tangible, real-time animation experimentation, seen below:
[embedded video]
The benefit of this immediate feedback is that one can begin to play. Usual software development consists of planning ahead of time exactly how something should behave, and developing contingencies for when things go wrong. This format instead lets someone play with variables, such as the size of a character’s head or the physics of a world, and see the results immediately. That makes it possible to fine-tune a world, or to test its bounds and see whether unexpectedly fun behavior emerges.
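You can rig up a crude version of this feedback loop yourself: run an animation loop that re-reads its tuning values from a file every frame, then edit the file while it runs. Here’s a minimal sketch in Python (params.json and its keys are made up for illustration):

```python
# Minimal sketch of live-tweakable parameters: the loop re-reads
# params.json every frame, so saving an edit to the file changes
# the "world" immediately, no restart required.
import json
import time

def load_params(path="params.json"):
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, ValueError):
        # Fall back to defaults if the file is missing or saved mid-edit.
        return {"head_size": 1.0, "gravity": 9.8}

while True:
    p = load_params()
    # ...draw the character with p["head_size"], step physics with p["gravity"]...
    print("head_size:", p["head_size"], "gravity:", p["gravity"])
    time.sleep(1 / 30)  # roughly 30 frames per second
```

It’s a far cry from Victor’s tooling, but the effect is the same in miniature: tweak a number, watch the world change.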
This is more consistent with how an artist might try multiple things in order to fully flesh out a concept, instead of hoping to get lucky with it, or spending ages understanding the mechanics so well that the result is contrived.
Here’s the full version of Victor’s talk:
[embedded video]
You can see an early application of this coding mentality in the indie game Under the Ocean:
[embedded video]
Websockets
One of my classmates, Craig Protzel, now an ITP resident, showed me some code he was working on with a professor: heartbeat-monitor-type data streamed in real time onto a time-series line graph on a data-viz web site, with node.js and socket.io on the backend.
The best demo I could find of something similar is this Arduino board with two potentiometers, streaming its output to the web via websockets and a Python script:
[embedded video]
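The plumbing behind a demo like that is surprisingly small now. Here’s a minimal sketch of the Python half, assuming the pyserial and websockets packages and a made-up serial port; the Arduino side just prints its potentiometer readings over serial:

```python
# Minimal sketch: forward Arduino serial output to browsers over a websocket.
# Assumes `pip install pyserial websockets`; the port name is hypothetical.
import asyncio
import serial
import websockets

ser = serial.Serial("/dev/ttyACM0", 9600)  # wherever the Arduino shows up

async def stream(websocket, path=None):
    # Push each line of sensor output (e.g. "512,303") to the client.
    # A blocking readline keeps the sketch simple; real code would use a thread.
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        await websocket.send(line)

async def main():
    async with websockets.serve(stream, "localhost", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())
```

On the other end, a page opens new WebSocket('ws://localhost:8765') and appends each value to a line graph as it arrives.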
Here’s the implication. The web has been fairly static since its inception. Even when AJAX came along and ushered in web 2.0, you were still dealing with an active, getter-type web: databases, bandwidth, and client browsers just couldn’t handle unrequested data coming in. Now they can, what with the cloud, key-value stores like redis, messaging protocols like AMQP, faster and bigger bandwidth pipes, etc. The web is going to start looking more like a stream and less like a restaurant menu.
In my Redial class last semester, a lot of our final projects involved setting up an open-source Asterisk telephony server with a cheap phone number routed to an Ubuntu server instance in the cloud — that stack was all the same, but our applications were different: one team (Phil and Robbie) made a super-easy conference call service, another dude (Tony) made a multiplayer sequencer controlled by people dialing in and punching numbers on their keypads:
[embedded video]
Server stacks are flattening, in a sense: you can set up a server in any language you want (Ruby Sinatra/Rails, JavaScript node.js, Python Flask), then plug in whatever extra services you need (database, key-value store, admin tools, task queues, load balancers). On the browser side, HTML5 and some degree of normalization have let JavaScript mature, so we now have all these kickass visualization and interface libraries for building better user interfaces, ones that can easily handle the structured data thrown at them by everything going on in the backend.
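To make that concrete, here’s a minimal sketch of the pattern in Python: a Flask server with Redis plugged in as the key-value store (assumes pip-installed flask and redis and a Redis instance running locally; the route and key names are illustrative):

```python
# Minimal sketch of a "flattened" stack: one web framework (Flask),
# one plugged-in service (Redis as the key-value store), JSON out.
from flask import Flask, jsonify
import redis

app = Flask(__name__)
store = redis.Redis(host="localhost", port=6379)

@app.route("/hits")
def hits():
    # INCR is atomic, so concurrent requests can't clobber each other.
    count = store.incr("hit_counter")
    return jsonify(hits=count)

if __name__ == "__main__":
    app.run(port=5000)
```

Swap Flask for Sinatra or node.js and the shape of the stack barely changes; that’s the flattening.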
One last note: with a little help from an Arduino and a network shield, you can now easily control the digital world with analog sensors, or vice versa.
Drones
I read Daniel Suarez’s latest book, “Kill Decision” (review here), in which a scientist is targeted because her research into weaver ants, the most warlike species on Earth, is inspiration for killer drones’ swarming algorithms. We’re living at the dawn of the age of drones: the US has found it can cheaply deploy drones to kill and monitor the enemy, managed through a bureaucracy of varying levels of kill and targeting authorization. We’ve all seen the videos of quadcopters acting together using simple rules (there’s a toy sketch of the idea below). It won’t be long until law enforcement and federal agencies can use drones domestically. Drones are far more versatile, expendable, and cost-effective than traditional overhead imagery. Look at the quality on this RC with a GoPro camera attached:
[embedded video]
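Those “simple rules” are worth seeing spelled out. Here’s a toy flocking sketch in Python, just the classic cohesion and separation rules acting on points in a plane; it’s nobody’s actual drone code:

```python
# Toy flocking sketch: each agent follows two local rules, cohesion
# (drift toward the flock's center) and separation (don't crowd a
# neighbor). Group behavior falls out of just these, with no leader.
import random

N, STEPS = 10, 200
pos = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]

for _ in range(STEPS):
    cx = sum(p[0] for p in pos) / N  # flock center
    cy = sum(p[1] for p in pos) / N
    for i in range(N):
        vel[i][0] += 0.01 * (cx - pos[i][0])  # cohesion
        vel[i][1] += 0.01 * (cy - pos[i][1])
        for j in range(N):  # separation from close neighbors
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if j != i and abs(dx) + abs(dy) < 5:
                vel[i][0] += 0.05 * dx
                vel[i][1] += 0.05 * dy
    for i in range(N):
        pos[i][0] += vel[i][0]
        pos[i][1] += vel[i][1]

spread = max(abs(p[0] - cx) + abs(p[1] - cy) for p in pos)
print("flock spread after", STEPS, "steps:", round(spread, 1))
```

Real swarm work layers on collision avoidance, velocity matching, and actual physics, but the core idea is the same: no global plan, just local rules.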
Here was a “robokopter” sent up to monitor the police in Warsaw as they kept two hostile groups of marchers apart:
[embedded video]
I can’t see how drones won’t be banned soon. But how would anyone enforce a ban? Shoot down rogue drones? Jam them?
Somewhat corny, but here’s FPSRussia using what seems to be a terribly unsafe quadcopter with a machine gun attached:
[embedded video]
Suarez’s book takes its name from the idea that an unknown actor has built drones that target individuals, assassinate them, and self-destruct, without any instructions coming in after they’re released. They destroy their own fingerprint and are mostly impervious to being jammed or tracked back to their makers.
All with fairly cheap parts. It’s not the same thing as weaponizing, say, a biological agent, or building a big enough EMP, which I would imagine are two of the security apparatus’s biggest fears. I’m kind of sad more people haven’t read Suarez’s books, because they address pretty near-term implications of emerging tech.
FaceShift
I didn’t get to take this class, but Professor Kyle McDonald (whom I took for Glitch) had his students play with this new kickass software, FaceShift. Check the demo:
[embedded video]
Basically, you plug a Kinect into your computer and spend half an hour mapping your facial expressions into the software. It then renders a model of your face, onto which you can map any skin you want (say, another person’s face), quickly and easily. Previously this sort of work was the domain of special-effects studios and game-development teams. Now it’s downloadable, and you can buy a Kinect (or a similar camera) to capture yourself.
Holy Grails That Are Still Missing
Self-Recording
Here’s what I wrote my buddy Chris after he showed me this upcoming product, the Autographer:
The problem with it is threefold:
- it’s unproven, with not much evidence of what it actually delivers
- the angle is all wrong: you have to wear it on your purse (!) or on a lanyard, so it will jerk around, won’t stay facing outward, and won’t have a good angle on what’s important (even if it supposedly has a 135-degree lens)
- the holy grail of something like this would be something that takes photos OF you, not FROM you
It also means that photographing people and recording people’s lives has been primarily a solo adventure at this point. Hence the phenomenon of mirror photos, the forward-facing camera (so you can see yourself and the person you’re with while you take the photo), etc. Not many people are lucky enough to A) want photos and B) have someone along who loves to take photos at that moment. I have no awesome photos of Iraq as a result (and those that I took, as you know, got me in a shitload of trouble… can’t even claim to have returned with anything beautiful from that hard-knocks lesson). Maybe this also explains why photos of animals have done so well: they’re ignorant of us taking photos of them, do their crazy animalistic shit with reckless abandon, and thus make excellent photo subjects.
Extend that to personal data collection (which is what Galapag.us will start off as) and it’s a somewhat isolating experience. Who is going to follow you around and collect data on you? You have to do it for yourself, or participate in activities that can be tracked automatically (marathons, online social networking sites, Fuelband, etc.). Maybe social media whores (like me) became that way because that’s the cutting edge for living a quantified, recorded life.
Anyway, I think it’s pretty fascinating that among friends who are social users of online stuff, Facebook and Instagram are the key players (which is why Facebook paid $1 billion for Instagram). I love Twitter, but very few casual users use it. Pinterest is primarily women, fantasy leagues are men, etc. But photos are HUGE. Facebook knows it. And in my opinion we’re not even a third of the way toward capturing the full potential of the human experience through a camera. The tech is not there yet.
Decentralization
The web is not very distributed or decentralized; digital democracy is something of a myth. When Amazon AWS has a hiccup, usually in its Virginia availability zone, half of the American internet’s most popular sites go down. The NSA and other countries’ intelligence agencies are up the telcos’ asses with eavesdropping, and sites are being shut down by ICE and the FBI. Virtually the only site that has remained impervious to government attack is The Pirate Bay, which keeps coming up with new ways to thwart shutdown efforts, primarily through redundancy and distribution. Torrenting has almost become a political act, and it doubles as a resilient model for an unevenly distributed modern-day internet.
Social networks are walled gardens. Twitter, once a darling, has been caught by the more traditional walled-garden peloton and is now locking down its data, after having, at one point, a role-model API.
At some point we will have IPv6. Its rollout has thankfully been slow, coming only after Windows has gone to some lengths to secure its OS, but it will eventually allow any sensor, device, appliance, whatever to have its own internet-addressable unique ID, for better or for worse.
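As a tiny illustration of what device-level addressability means, here’s a sketch of a “sensor” answering directly on an IPv6 socket in Python (the port and the reading are made up, and it assumes the machine actually has a routable IPv6 address):

```python
# Minimal sketch: a sensor serving readings on its own IPv6 address,
# no NAT or port-forwarding gymnastics required.
import socket

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("::", 8080))  # "::" means every IPv6 address on this machine
srv.listen(1)

while True:
    conn, addr = srv.accept()
    print("request from", addr[0])  # the client's full IPv6 address
    conn.sendall(b"23.5 C\n")       # a pretend temperature reading
    conn.close()
```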