Organization Tools - Version 2

Compact, navigable org chart


After writing the first version of the organizational tools, I wanted to see how I could display the org chart in a more compact and impactful way.

The traditional view of an org chart, a tree structure that builds out horizontally, is typically only suited to manual creation and doesn't scale well to even modestly sized organizations.

In this design exploration I looked at ways to build the chart such that it could display as many employees as possible and be navigated easily.

The result is the layout above, which uses space as efficiently as possible while still showing every employee. It also displays a mini chart in the left-hand corner for quick navigation to a particular manager's organization.


The chart's data was collected in the same way as the previous version, using Ajax calls to the enterprise server.

The chart display is now created using XSLT to transform the data from the server, and to filter the tree when the user performs a faceted browse operation.
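The faceted filtering step can be sketched in plain Python (the real implementation used XSLT on the server data; the XML shape, attribute names, and employee names below are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical org tree in roughly the shape a server might return;
# the names and the "dept" facet are illustrative, not the real format.
ORG = ET.fromstring("""
<employee name="Pat" dept="Eng">
  <employee name="Sam" dept="Eng"/>
  <employee name="Lee" dept="Sales">
    <employee name="Kim" dept="Sales"/>
  </employee>
</employee>
""")

def facet_filter(node, dept):
    """Return a pruned copy of the subtree, keeping only branches that
    contain at least one employee matching the selected facet."""
    kept = [c for c in (facet_filter(child, dept) for child in node)
            if c is not None]
    if node.get("dept") == dept or kept:
        copy = ET.Element("employee", dict(node.attrib))
        copy.extend(kept)
        return copy
    return None

filtered = facet_filter(ORG, "Sales")
print([e.get("name") for e in filtered.iter("employee")])
```

Keeping a matching node's management chain (rather than just the matches) preserves the tree's navigability after a filter.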

Audio Accessibility

What would an alert look like for the deaf?


This design exploration asked the question "What would audio alerts look like on a computer when the user cannot hear them?" Most operating systems have an accessibility option that allows a user to see that an alert was issued, but most don't differentiate between the types of alert.


The solution presented allows users to associate graphics or waveforms with alerts, along with their titles, in a visual form in the lower part of the user's screen.

When active, this solution would hook into the OS, intercept alerts as they are issued, and display them according to the user's preferences.

In the case where a user has not predefined images to associate with certain alerts, the system shows a waveform of the sound so that, over time, the user can become familiar with its shape and thus infer the message it's trying to convey.
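The image-or-waveform fallback described above can be sketched as a simple dispatch (the alert names, image table, and `make_waveform_thumbnail` helper are all invented for the example):

```python
# Hypothetical user-defined associations of alert names to graphics.
user_images = {"new_mail": "mail.png"}

def make_waveform_thumbnail(sound_file):
    # Stand-in for rendering the alert sound's waveform; a real version
    # would draw the audio samples of the sound file.
    return f"waveform({sound_file})"

def visual_for_alert(alert_name, sound_file):
    """Pick the user's chosen image if one is set; otherwise fall back
    to a waveform the user can learn to recognize over time."""
    return user_images.get(alert_name, make_waveform_thumbnail(sound_file))

print(visual_for_alert("new_mail", "mail.wav"))     # user's chosen image
print(visual_for_alert("low_battery", "batt.wav"))  # waveform fallback
```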

The mock-up images were produced in Adobe Photoshop and Illustrator and were later used in an application for patent protection.

VOIP Timeshifting

How could a user control a VOIP stream?


This design exploration asks the question "How could a user control a VOIP stream to handle a phone call most efficiently?"

Inspired by TiVo, we looked at ways the user could timeshift the conversation to catch parts they had missed. This could be particularly useful on conference calls, where you're not always actively talking or listening.


In addition to manually timeshifting the voice stream, we considered automatic triggers that would initiate a time shift. For instance, if your name is mentioned and there is a period of silence (e.g. someone on the call says "what do you think about that, Eric?"), the application would shift back 15 seconds and play back the conversation at 2x speed to help you catch up.
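The trigger logic above can be sketched in a few lines; the silence threshold and function names are assumptions made for illustration, only the 15-second rewind and 2x speed come from the design:

```python
import re

REWIND_SECONDS = 15     # how far to jump back, per the example above
CATCHUP_SPEED = 2.0     # playback rate while catching up
SILENCE_GAP = 2.0       # assumed seconds of silence suggesting a pending question

def should_timeshift(transcript_window, user_name, silence_seconds):
    """Trigger a rewind when the user's name was just mentioned and the
    call has gone quiet, as in "what do you think about that, Eric?"."""
    mentioned = re.search(rf"\b{re.escape(user_name)}\b",
                          transcript_window, re.IGNORECASE)
    return bool(mentioned) and silence_seconds >= SILENCE_GAP

def plan_playback(position_seconds):
    """Return (new position, playback speed) for the catch-up playback."""
    return max(0.0, position_seconds - REWIND_SECONDS), CATCHUP_SPEED

if should_timeshift("what do you think about that, Eric?", "Eric", 3.0):
    print(plan_playback(120.0))   # rewind to 105.0 s, play at 2.0x
```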

Controls were also designed to allow the playback stream to be presented "on top of" (louder than) the live stream so that the user could pay attention to both.

The mock-up images were produced in Adobe Photoshop and Illustrator and were later used in an application for patent protection.

Egocentric network addressing tool

What would a new addressing control look like?


In this design exploration, we asked the question "If people have built an online social network, in what novel ways could they address their contacts?"

Assuming you've built an online presence with sites that let you list your contacts and friends, a system can produce an egocentric sociogram for you: a picture of all the people you know, and how well you know them, drawn as a graph with you at the center.

This graph looks like the picture above: you are in the center and your friends radiate outward, with their distance proportional to how often you communicate (or any other metric that has been defined).


Based on this type of sociogram, we designed a UI control that lets users select a "radius of influence" as a way to capture the people closest to them, with varying degrees of inclusion.

This control could be used to address an email or to grant access to content. The measure of distance could be modified based on the type of communication: for example, distance could measure how long you've known someone, and the control could be used to invite people to a class reunion or an anniversary.
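The selection the radius control performs can be sketched as a threshold over per-contact distances (the contact names and distance values below are invented for the example):

```python
# Hypothetical egocentric address selection: each contact carries a
# normalized distance from "you" (0 = center, 1 = edge of the graph).
contacts = {
    "Alice": 0.2,   # communicates often -> close to the center
    "Bob": 0.5,
    "Carol": 0.9,   # rarely in touch -> near the edge
}

def within_radius(contacts, radius):
    """Return the contacts whose distance falls inside the chosen
    radius of influence."""
    return sorted(name for name, d in contacts.items() if d <= radius)

print(within_radius(contacts, 0.6))   # Alice and Bob, not Carol
```

Swapping in a different distance metric (e.g. years known instead of communication frequency) changes who falls inside the radius without changing the control itself.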

The mock-up images were produced in Adobe Photoshop and Illustrator and were later used in an application for patent protection.

Document-like Voice Recording Exploration

How could voice recording feel like word processing?


In this design exploration we asked the question "How could a voice recording feel like generating a document in a word processor?"

In response to the proliferation of individual podcasts, we wanted to see how we might make the process of creating an audio recording feel more familiar to the average user.


We focused on two particular aspects of document creation that are very familiar to word processing tools: pagination, and styling.

For pagination, we designed an interface that shows the waveform in its familiar style but wraps it at a specified point so that it grows vertically like lines of text in a document. We also let users press the Enter key to indicate they were starting a new paragraph.
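The wrapping behavior can be sketched as splitting a long run of audio samples into fixed-width "lines", with extra breaks where the user pressed Enter (the sample values, line width, and break positions are illustrative):

```python
def wrap_waveform(samples, line_width, paragraph_breaks=()):
    """Split samples into lines of at most line_width, starting a new
    line at every paragraph break, like text in a word processor."""
    lines, start = [], 0
    breaks = sorted(set(paragraph_breaks))
    for i in range(len(samples)):
        at_break = i in breaks and i > start
        if at_break or i - start == line_width:
            lines.append(samples[start:i])
            start = i
    lines.append(samples[start:])   # the final, possibly short, line
    return lines

samples = list(range(10))           # stand-in for audio sample data
print(wrap_waveform(samples, 4, paragraph_breaks=[6]))
```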

For styling, we carried the familiar concepts of bolding, italicizing, etc. over into the voice recording. One possible implementation of bolding in a voice recording might be making the voice louder or playing bold background music.

The mock-up images were produced in Adobe Photoshop and Illustrator and were later used in an application for patent protection.

Context Enhanced Instant Messaging System

How could we further personalize the IM experience?


During a design exploration of how to further enhance the instant messaging experience, I came up with an interface for presenting supporting content (photos, sounds, links, etc.) during an instant messaging exchange.

The supporting content is selected based on (1) the users involved in the chat and (2) the content of the chat.


In a proposed implementation of this design (patented as US 7,503,007), the users would initiate an IM session and begin a conversation. As the conversation progresses, an intelligent system scans for keywords, which it then uses to search local content for relevance.

When relevant content is found, it is presented to the user in an additional part of the IM window, and the user can choose to add it into the chat.
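The keyword-to-local-content matching step can be sketched as below; the keyword set, the tagged content index, and the tokenization are all assumptions made for illustration, not the patented implementation:

```python
# Hypothetical keyword list and tag index over the user's local content.
KEYWORDS = {"vacation", "beach", "photos"}

local_content = {
    "beach_2004.jpg": {"beach", "vacation"},
    "budget.xls": {"finance"},
}

def relevant_content(message):
    """Scan an IM message for keywords and return local items whose
    tags overlap the keywords that were found."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    hits = words & KEYWORDS
    return sorted(item for item, tags in local_content.items()
                  if tags & hits)

print(relevant_content("Did you see the photos from our beach trip?"))
```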

User-Aware Smart Menu

How can we prompt healthful eating choices?


In this design exploration we asked "How could we use technology to prompt healthful eating?"

We noticed new online services that collect nutrition information about foods and let people log what they've eaten, alongside existing services that let people rate restaurants, and still others that store people's health information.

We decided that one place to aggregate this information in context was the menu itself.


The proposed implementation is a menu made of digital ink that displays the dishes the restaurant offers and connects to the network for more information.

The menu lets the user log in, then retrieves his personal information. It compares his nutrition goals with the food he's already logged as eaten today, and shows his allotment for the rest of the day.

It takes his health conditions into account and potentially disables menu options for him that are too risky, and highlights others that are among the best choices.
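The per-dish decision described above can be sketched as a small classifier; the calorie budget, dish data, and the "risky" and "best choice" rules are all invented for the example:

```python
def classify_dish(dish, remaining_calories, conditions):
    """Disable dishes that conflict with a health condition, and
    highlight those that fit comfortably in today's remaining allotment."""
    if conditions & dish["risk_flags"]:
        return "disabled"
    if dish["calories"] <= remaining_calories * 0.5:
        return "highlighted"
    return "normal"

menu = [
    {"name": "Fried platter", "calories": 1200, "risk_flags": {"high_sodium"}},
    {"name": "Grilled fish", "calories": 450, "risk_flags": set()},
]

remaining = 1000  # daily goal minus what's already been logged today
for dish in menu:
    print(dish["name"], classify_dish(dish, remaining, {"high_sodium"}))
```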

It contains a section that displays reviews, which can prompt him to make choices based on the community's opinion of certain dishes.