Generative Soundscape 0.2.1 – IR Comm & Circuit Prototyping

This was an iterative step toward a generative soundscape installation. I handled everything from UX design and concepting to PCB design, hardware prototyping, fabrication and assembly, testing, and validation.

Deliverables: Hardware prototypes (PCB modules)

Tools: Arduino, Eagle CAD, Othermill

Brief

After a failed attempt at creating modules that would communicate through sound, I started looking into infrared communication.

This project experiments with infrared communication between various modules. All modules come from the same design made in Eagle CAD: a through-hole board routed on the Othermill. Each board carries an embedded Arduino (ATmega328), two IR receivers, three IR emitters, an LED, and a push-button.

Modules can capture the code from any IR remote and relay it among their closest peers.
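
As a rough illustration of that relay behavior, here is a minimal sketch assuming the common Arduino IRremote library and NEC-format codes; the pin numbers are placeholders, not the board's actual netlist:

```cpp
// Minimal relay sketch (assumptions: classic IRremote library, NEC codes,
// hypothetical pin assignments) — receive a remote code and resend it.
#include <IRremote.h>

const int RECV_PIN = 11;     // one of the two IR receivers
const int LED_PIN  = 13;     // status LED

IRrecv irrecv(RECV_PIN);
IRsend irsend;               // IR emitters on the library's fixed PWM pin (pin 3 on an ATmega328)
decode_results results;

void setup() {
  pinMode(LED_PIN, OUTPUT);
  irrecv.enableIRIn();       // start listening for remote codes
}

void loop() {
  if (irrecv.decode(&results)) {
    digitalWrite(LED_PIN, HIGH);          // show that a code arrived
    irsend.sendNEC(results.value, 32);    // relay the same code to nearby modules
    digitalWrite(LED_PIN, LOW);
    irrecv.enableIRIn();                  // sending reuses the timer, so re-arm the receiver
    irrecv.resume();                      // ready for the next code
  }
}
```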

Next Steps

Evaluate power consumption to figure out how the modules can be powered from batteries.
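
One common way to stretch battery life on an ATmega328 is to put it into power-down sleep between IR events. Below is a minimal sketch of the idea; the wake-up interrupt pin and wiring are my own assumptions, not part of the current board:

```cpp
// Sketch of the idea: sleep the ATmega328 until an external interrupt
// (e.g. an IR receiver output wired to pin 2 / INT0) wakes it up.
#include <avr/sleep.h>

void wakeUp() {
  // Nothing to do; waking is the side effect of the interrupt firing.
}

void goToSleep() {
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);                 // deepest sleep mode
  attachInterrupt(digitalPinToInterrupt(2), wakeUp, LOW);
  sleep_enable();
  sleep_cpu();                                         // execution halts here until the interrupt
  sleep_disable();
  detachInterrupt(digitalPinToInterrupt(2));
}

void setup() {
  pinMode(2, INPUT_PULLUP);
}

void loop() {
  // ... handle any pending IR traffic here ...
  goToSleep();                                         // then sleep until the next IR edge
}
```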

Previous Iterations

Solar Data Logger

Concept

How could we log traffic at ITP's entrance and compare use of the elevators against the stairs? Using a solar-powered DIY Arduino, we decided to visualize this data (and store it in a .csv table) on the screen between the elevators at ITP's entrance.

Development

After building a DIY Arduino that could be powered by solar energy, we followed Kina's tutorial to set up a basic solar rig that charges a 3.7 V, 1200 mAh LiPo battery. We connected the solar panels in series and ended up with an open-circuit voltage of 13 V; our current readings, however, were only 4 mA.
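
A quick back-of-the-envelope check (assuming those readings hold in steady sunlight) shows why that current is the bottleneck: charging a 1200 mAh battery at 4 mA would take roughly 1200 mAh ÷ 4 mA = 300 hours, before accounting for charging losses.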

We hooked the Arduino data up to a Processing sketch that overwrites the .csv table data every second. All of the code can be found in this link.
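
The linked code isn't reproduced here, but as an illustration of the Arduino side of that pipeline, a sketch along these lines could count entrance events and print one CSV row per second for the Processing logger; pins, sensors, and column names are hypothetical:

```cpp
// Hypothetical Arduino-side sketch: count events on two inputs and
// print one CSV row per second over serial for the Processing logger.
const int STAIRS_PIN   = 2;
const int ELEVATOR_PIN = 3;

unsigned long stairsCount = 0;
unsigned long elevatorCount = 0;
int lastStairs = HIGH;
int lastElevator = HIGH;
unsigned long lastPrint = 0;

void setup() {
  pinMode(STAIRS_PIN, INPUT_PULLUP);            // active-low switches/sensors
  pinMode(ELEVATOR_PIN, INPUT_PULLUP);
  Serial.begin(9600);
  Serial.println("seconds,stairs,elevator");    // CSV header row
}

void loop() {
  // Count falling edges so each trigger is only counted once.
  int stairs = digitalRead(STAIRS_PIN);
  int elevator = digitalRead(ELEVATOR_PIN);
  if (stairs == LOW && lastStairs == HIGH)     stairsCount++;
  if (elevator == LOW && lastElevator == HIGH) elevatorCount++;
  lastStairs = stairs;
  lastElevator = elevator;

  // Emit one CSV row per second for Processing to write into the .csv file.
  if (millis() - lastPrint >= 1000) {
    lastPrint = millis();
    Serial.print(millis() / 1000); Serial.print(",");
    Serial.print(stairsCount);     Serial.print(",");
    Serial.println(elevatorCount);
  }
}
```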

ReSounding the City

This performance was made possible by the Graduate Student Organization (GSO) Grant at NYU's Tisch School of the Arts. In collaboration with Daniela Tenhamm-Tejos, Jana L. Pickart, and Ansh Pattel, we explored body language and psychogeography in urban spaces. I was the developer behind the gesture recognition code. For the official website, please visit this link.

For this project I created a series of visual effects that responded to the performer's choreography, the poet's voice, and audience interaction. These effects were built with openFrameworks, a C++ toolkit. Here is a sneak peek of these effects:

Performer and Tech Tryout

All Developed Effects
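
The performance code itself isn't shown here, but as a rough idea of how one of these voice-reactive effects can be structured, here is a minimal openFrameworks sketch of my own (thresholds and mapping are assumptions) that maps microphone loudness to a drawn shape:

```cpp
// Minimal voice-reactive openFrameworks sketch (simplified example,
// not the performance code): louder microphone input -> larger circle.

// ofApp.h
#pragma once
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    void setup() override;
    void draw() override;
    void audioIn(ofSoundBuffer &input) override;

    ofSoundStream soundStream;
    float smoothedVolume = 0.0f;    // running, smoothed mic loudness
};

// ofApp.cpp
void ofApp::setup() {
    ofSoundStreamSettings settings;
    settings.setInListener(this);   // route microphone buffers to audioIn()
    settings.numInputChannels = 1;
    settings.sampleRate = 44100;
    settings.bufferSize = 256;
    soundStream.setup(settings);
}

void ofApp::audioIn(ofSoundBuffer &input) {
    // Smooth the buffer's RMS so the visuals don't flicker.
    smoothedVolume = ofLerp(smoothedVolume, input.getRMSAmplitude(), 0.1f);
}

void ofApp::draw() {
    ofBackground(0);
    // Map loudness (assumed 0.0–0.2 RMS range) to the circle's radius.
    float radius = ofMap(smoothedVolume, 0.0f, 0.2f, 10.0f, 400.0f, true);
    ofDrawCircle(ofGetWidth() / 2.0f, ofGetHeight() / 2.0f, radius);
}

// main.cpp
int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

The actual effects layered more inputs (choreography tracking and audience interaction), but the same pattern of smoothing a live signal and mapping it onto drawing parameters applies.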

Health Applications –Pain Tracking–

We chose two mobile applications that should ideally help patients collect meaningful information about their symptoms and share it with their doctors in a way that lets doctors make better recommendations. We looked for three overall qualities in the applications: first, that using the app doesn't add frustration on top of the patient's health concerns; second, that what patients are registering is easy to input; and third, that what's being registered is useful for the doctor. After researching the abundant alternatives, we chose RheumaTrack and Pain Coach, and discarded Track React and Catch My Pain. Overall, we sought the best candidates so we could ultimately decide which of the two was better. It's fair to say that both have useful and usable affordances, but RheumaTrack adds aggregate value that Pain Coach doesn't.

Overall, we concluded RheumaTrack is the better application because of one particular function: the way people input their joint pain. In a nutshell, this interface is a meaningful (useful and usable) way for both patient and doctor to visualize and record the pain condition in a very predictable manner. The overall process of adding a new entry (pain, medication, and activity), though a bit cramped, is clearer than in the others and pretty straightforward. The dashboard follows conventional mobile GUI design standards: items and affordances are perceivable (easily readable) and predictable, and the overall navigation gives clear feedback. I found two simple UX elements that could improve. First, when adding a "New Check" there is no progress bar to predict how long the task will take. Second, the "Activity" interface could visually improve in various ways: better contrast between the recorded data and the layers of pain intensity would enhance perceivability (readability), and the tags' date format can be confusing. Nevertheless, the overall purpose of the "Activity" function is very useful for doctors.

Object Reflections

This backpack caught my eye immediately and I've carried it ever since –eight years now–. A clean, minimalist outer silhouette in coal and dark black communicated elegant simplicity. The continuity of the surrounding body fabric onto the handles reinforced this minimalist perception and added structure and endurance. To date, its outer simplicity disguises its inner complexity of compartments, to the extent that pockets often pass unnoticed. Various adventurous stories involving its rogue laptop compartment have made it feel invaluable to me. I'm still discovering alternate uses for the side and handle straps, such as water-bottle holder, umbrella drainer, or pen/marker holder. And besides its impeccable impermeability, this awesome backpack is awfully comfortable.

This other object keeps tormenting our daily experiences, even though solutions have been crafted by now. In a nutshell, this remote control frustrates people by cognitively loading us with excessive affordances (buttons). It's fair to clarify that the tasks all these affordances tackle may address interesting user needs. However, the frequency at which these needs arise doesn't make up for the cognitive load. For example, as a beginner user, I don't know what the A, B, C, and D buttons are for. Even though they may not be significantly bigger than other buttons, the fact that they are colored distracts from reading the overall control layout. A good solution already on the market is the Apple TV remote. It's consistent with the laptop remote controls Apple created back in the mid-2000s, allowing people to learn it easily and quickly.

Tangible Retail Display

Images taken from their Blog Post

After a lot of searching and looking around, I stumbled upon a company that creates interactive products for commercial scenarios that move beyond tactile interfaces toward tangible ones. The interactive product is triggered by lifting one of the items sold in the store, exposing an album of first-person stories about the brand's diverse products. Even though it sets up an innovative consumer experience, after half an hour of waiting for someone else to try it, I finally decided to take it for a spin. The product is a sealed black box with what I imagine is a projector, a computer, and a camera. The main idea behind it is to transform any surface into an interactive tangible user interface. Basically, this is a usable interactive experience with catchy stories built on top of a tracking framework.

The fact that this product interfaces with real tangible artifacts does open an entire realm of possibilities, even though here it was only used to trigger a strictly tactile command interface. This tactile 2D interface had the proper affordances to easily manipulate the experience. Its results could easily be noticed when navigating and selecting different features, and because it was built on top of the tactile-interface paradigm, it was really easy to learn how to use. However, it lacked the first principle of interaction design: it wasn't perceivable as an interactive display at first sight. I'm not really sure why, but its call to action –or the lack of one– left clients adrift. Even though the product had a blinking text prompt about 1/10 of the display's height –more or less– inviting people to interact –"Please lift to read the stories"–, the overall idea of how to start the interactive experience wasn't very persuasive. Maybe that's because it resembled the kind of light-display installation you're not supposed to touch, but I'm not 100% certain. Overall, the five-minute experience was entertaining.

The hypothesis I had before approaching the product was that this interface should aim for what Norman calls the affective approach: considering the context and goal are retail-oriented, it is not a scenario that requires serious, concentrated effort to reach its goal. In this order of ideas, the product balances beauty and usability fairly well, conveying easy-going use and contemplation.

NUI BCI Study #1 "Mindwave"

Through this first exploration of interfacing with Neurosky's Mindwave, I've learned a couple of things about EEG and Processing. The library I'm currently working with is called ThinkGear, which allows you to read different signals (low and high values for alpha, beta, and gamma, plus delta and theta signals, and a blink signal). Besides the annoying Bluetooth pairing, this consumer interface is still a work in progress, and Processing's latency doesn't make user feedback any easier. I'm sure there are better ways of interfacing with it to optimize user feedback –other software–, and there are probably better consumer EEG devices out there. Nonetheless, it has been a thrilling experience to better understand sine and cosine functions, arrays, and libraries. Here's the second draft I've crafted with this curious natural brain-computer interface. The code for this UI draft can be found in this GitHub repo.