A truly humane device

Humane just unveiled their AI Pin. And it could really change everything.

Years ago, I applied to get Google Glass.

I was working at Quirky at the time and I clearly remember one of my coworkers, Jeff Bartenbach, telling me how he thought that Glass was the wrong form factor.

“People will never adopt wearing something on their head,” he said. “You look weird and nobody wants a camera staring at them all the time.”

I’m paraphrasing—he definitely said it nicer than that.

It was the heyday of wearable computing. Only a few months later, Apple would announce the Apple Watch. When I next saw Jeff at a holiday party, he basically said I told you so. And it’s true—Google Glass (in its consumer form at least) shut down not long after, to live on in an enterprise version designed for assembly-line workers, only to then have that also killed off just earlier this year.

Since then, there’s been a true explosion of wearables. While there were wearables before Apple Watch, three key factors make today’s wearable devices far different than they were in those days:

  1. Apple Watch made wearable devices fashionable accessories (at least to some).
  2. Apple Watch did a ton to educate the consumer. Even more impressively, they seem to have done this particularly well among those you might not stereotype as tech early adopters, like the elderly, who I think buy it for its strong health capabilities.
  3. There were technological developments—a critical mass that was reached—that made wearables make sense: better processors (and with them, better batteries and power management), better sensing, and the true maturity of phones as an interface for separate hardware devices (nobody wants to navigate menus on a Fitbit’s tiny screen—apps solved the software layer).

But then things seem to have stalled a bit. Wearables, in their current conception, have matured, with each Apple Watch update a bit incremental (though that may not continue, since Apple is reportedly making progress on really incredible sci-fi-esque technical problems in sensing, like detecting blood glucose without puncturing the skin).

A reason for that is that we’ve hit a new technological boundary. One of the key points of wearables is that they blend into daily life: one of the key selling points of the Apple Watch (or even Google Glass) was that you wouldn’t need to look at your phone all the time. In this way they’re a sort of pseudo-cybernetic interface—a device that furthers the technologist’s dream of a human-machine hybrid, the ultimate HCI.

One way that wearables did this is by blending in with you and the world—smaller devices bolted on to your wrist or face or finger become part of you. But the other way they do this is by bringing you and the world into them: they detect your body’s state, location, ambient noise level, etc.—nearly everything needed to deeply understand the context they’re in.

Three problems, however, have consistently gotten in the way of going even further toward creating this omniscient cyborg device:

  1. Such devices lack arguably the most important sense they could have: vision (except for Glass, I guess, but not really). There’s no camera.
  2. Even with perfect sensing, these devices aren’t very smart in how they process their context. Personal assistants like Siri and Google Assistant seem to get less intelligent by the day, even at tasks they once handled well. And humans have to do the majority of the work, processing and reformatting their intent and their environment’s signals into a format the machine can parse.
  3. These devices are either still just versions of screens or, worse, they rely on your phone’s. They have no other way to output their data—a whole half of the I/O loop—except for voice responses which, as discussed, they’re not smart enough to do well.

Then along comes the Humane AI Pin.

I’ve been following Humane for a while now and, to be honest, have been a bit skeptical. They seemed like a ton of hype with not a lot to show for it for a long time. Founded in 2019 by two senior Apple execs, Imran Chaudhri (formerly Director of Design for the Human Interface Group for nearly every Apple platform, now President of Humane) and Bethany Bongiorno (former Software Engineering Director, now CEO), they've been very secretive about what they've been working on. But as the saying goes, hardware is hard, especially when you're starting right as COVID was taking off.

That is, until earlier this year when Imran demo'ed part of the product on a TED stage:

But now their product is finally here, and like all great products, it seems to have come at the perfect time, right at the intersection of hardware and software developments that make it possible.

I haven’t used the device, nor even seen one in person, so all I can comment on is the launch video, but wow, it does seem pretty stunning:

Lots of people have been commenting on the primary interaction mechanism: push-to-talk. Positioned on your chest like that, it really is like a radio. I'm surprised to see how much negativity there is around this PTT interaction in particular. It doesn't seem that hard.

The device solves several problems at once in a pretty gorgeous, appropriately priced (at $699), and eminently adoptable pin or clip form factor:

  1. Output: a monochrome laser projector casts information and an interface onto your palm, finally bypassing the need for a screen.
  2. Contextual awareness: it has a camera, and one that doesn’t seem super awkward to point at people, because it’s not on your face. That’s huge. Check out the part of the demo where he asks how much protein is in the almonds in his hand. Wow.
  3. Understanding: AI is now good enough that you can feed the messy data of life into the system naturally. It meets you where you—and the world—are.

Moreover, and perhaps even more excitingly, this is a device that has quite literally put design and humanity—what many have increasingly called “humane computing”—at the center of its concept. There’s something very radical here that I think people aren’t appreciating enough: how big a deal it is when the very name of the company is a design objective.

It’s a breath of fresh air and I can’t wait to try it. And it’s yet another touchpoint in an exciting time for spatial computing.

Last edited on 1.29.24