Why I’m Cutting Down on Artificial Sweeteners

I may have been wrong about artificial sweeteners. I’ve always been a big fan of them, because I love stuffing sweet things in my mouth, but also try to keep my daily calorie count somewhat reasonable.

Sweeteners also have an ideological draw — I love artificial things. I’m typing this on an artificial iPad, powered by artificial electricity, basking in artificial heat, under an artificial roof. So when I see people react with hostility to anything that isn’t “natural” (whatever that even means), I push back. You think artificial sweeteners are poison because Splenda packets aren’t plucked from the ground like potatoes? Well, then I’m gonna put Splenda on everything! I’ll sprinkle it on beef, I don’t even care. Take that, hippy!

And in theory, my pettiness should be supported by science. If controlling weight is the goal, then all that matters is calories in and calories out, right? Sweeteners lead to fewer calories in, so they help control weight in a world where calories are frickin’ everywhere. That’s the theory.

The thing is, the best theory in the world is worthless without data. More and more data is coming out about artificial sweeteners, and the results often differ from what theory would predict.

Most data isn’t conclusive. When you look at the whole population, people who use artificial sweeteners tend to be overweight. That’s just a correlation — maybe bigger people are trying to lose weight with sweeteners. It’s not evidence that sweeteners don’t work, but it’s a lack of evidence that sweeteners do work.

True experiments, in which people changed their intake of artificial sweeteners, would be more definitive if they showed an effect. Here’s a recent meta-analysis reviewing studies on artificial sweeteners, including randomized controlled trials. The researchers concluded:

Evidence from [randomized controlled trials] does not clearly support the intended benefits of nonnutritive sweeteners for weight management.

So again, not evidence that they cause weight increases, or poison you, or have any negative effects. But also not evidence that they do have their intended effect: weight loss.

I’ve seen speculation and emerging research on why the theory doesn’t match up with the data. Some people think it’s a psychological thing — the classic “I had a Diet Coke, so I can order two Baconators instead of one” phenomenon. Some think it’s more biological, with sweeteners mixing up the critters in our guts so they suck at dealing with the calories we do consume.

Whatever the case, there’s simply a lack of evidence that artificial sweeteners help with weight loss, or have any other positive effects. When it comes to translating research into actual behavior, here’s where I’ve come down, personally, for now:

  • Artificial sweeteners won’t kill me, so I won’t avoid them. I’ll use up the packets and syrups that we have around the house.
  • But there’s no evidence that they’ll help me, either. That puts them on the same scientific level as any other bullshit health intervention, like eating organic food, fad diets, or acupuncture. I wouldn’t do those things, so why continue slurping down Splenda?
  • Therefore, I’ll reduce my intake of artificial sweeteners. I’ll use sugar when I need it, or better yet, just have fewer sweet things overall. If I have the willpower for that, it’ll almost certainly lead to fewer calories in, with no mysterious counteracting force.

That’s where I currently stand, but I’m a scientist, so I’ll keep updating my opinions and behaviour as new evidence comes out.

For now, I’m cutting down on sweet things …

… right after the holidays.

On the Political Correctness of “g”

I’m a scientist now. Specifically, a scientist at a neuroscience company. But not a neuroscientist. I know, confusing, but the point is that I’ll probably be writing mostly about neuroscience on this blog now.

One project I’ve been working on involves intelligence. For decades, there has been a war between the idea that intelligence is one thing—i.e., there is a “g” factor that powers all intellectual feats—and the idea that intelligence is many things—i.e., there are several independent factors that power different intellectual feats.

I have no idea which is true. The data behind the debate is complicated, and I have a feeling it won’t be unambiguously interpreted until we have a better understanding of the physical workings of the brain. But one thing that I find fascinating is that the “g” idea is seen as the politically incorrect position.

Why? I suppose it’s because it simplifies people’s intellectual ability to a single number, which makes people uncomfortable. If your g factor is low, everything you can possibly do with your mind is held back. Which, scientifically speaking, sucks.

What I don’t understand is why adding more factors is more politically correct. Let’s say there are two independent factors underlying intelligence: g1 and g2. If you’re below average on g1, well, there’s still a 50% chance that you’re below average on g2 as well, in which case your mind is still behind on every possible intellectual feat. But hey, any given person still has that 50% chance of not being bad at everything. Is that the difference between acceptable and offensive? A coin flip’s worth of hope?
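To make that coin flip concrete, here’s a minimal simulation sketch, purely illustrative, with made-up standardized scores and an assumed full independence between the two hypothetical factors:

```python
import random

# Minimal sketch (hypothetical numbers): two independent, standardized
# intelligence factors, g1 and g2. "Below average" means a score below 0.
N = 100_000
below_g1 = below_both = 0

for _ in range(N):
    g1 = random.gauss(0, 1)  # person's score on factor 1
    g2 = random.gauss(0, 1)  # independent score on factor 2
    if g1 < 0:
        below_g1 += 1
        if g2 < 0:
            below_both += 1

print(f"P(below average on g2 | below average on g1) = {below_both / below_g1:.1%}")  # ~50%
print(f"P(below average on both factors)             = {below_both / N:.1%}")         # ~25%
```

If the two factors were positively correlated (as real cognitive measures tend to be), that conditional probability would creep above 50%, which only narrows the gap between the one-factor and many-factor pictures.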

It’s like the old argument for common ground (kinda) between atheists and religious folks: “I contend we are both atheists, I just believe in one fewer god than you do.” Similarly, little-g enthusiasts just believe in one (or more) fewer intelligence factors than their politically correct colleagues. They have even more common ground, because they agree that there is at least one measurable variation in intelligence. Is there really such a big difference between the positions? Can’t we all just get along?

I’m content to reserve judgement and follow the data in any direction, regardless of which direction is popular and deemed inoffensive for arbitrary reasons.

Could What You Do Without Thinking Be the Key to Artificial Intelligence?

My Master’s thesis explored the links between intuition and intelligence. I found that measures of intuition were closely related to intelligence: people who tended to rely on quick, unconscious decision making also tended to be more intelligent. When poking at the implications, I wrote:

The intuition-intelligence link also holds promise in the advancement of artificial intelligence (AI). Herbert Simon (see Frantz, 2003) has used AI as a framework for understanding intuition. However, this also works in the other direction. A greater understanding of intuition’s role in human intelligence can be translated to improved artificial intelligence. For example, Deep Blue, a chess-playing AI, was said to be incapable of intuition, while the computer’s opponent in a famous 1997 chess match, Garry Kasparov, was known for his intuitive style of play (IBM, 1997).  While it is difficult to describe a computer’s decision-making process as conscious or unconscious, the AI’s method does resemble analytic thought rather than intuitive thought as defined here. Deep Blue searched through all possible chess moves, step-by-step, in order to determine the best one. Kasparov, in contrast, had already intuitively narrowed down the choices to a small number of moves before consciously determining the most intelligent one. Considering that, according to IBM, Deep Blue could consider 200 000 000 chess positions per second, and Kasparov could only consider 3 positions per second, Kasparov’s unconscious intuitive processing must have been quite extensive in order to even compete with Deep Blue. Deep Blue’s lack of intuition did not seem to be an obstacle in that match (the AI won), but perhaps an approximation of human intuition would lead to even greater, more efficient intelligence in machines.

That was back in 2007, just a decade after Deep Blue beat Kasparov at chess. Here we are another decade later, and Google’s AlphaGo has beaten a champion at the more complex game of Go.

I’m no expert on machine learning, but my understanding is that AlphaGo does not play in the same way as Deep Blue, which brute-forces the calculation of 200 000 000 positions per second. That’s the equivalent of conscious deliberation: considering every possibility, then choosing the best one. Intuition, however, relies on non-conscious calculations. Most possibilities have already been ruled out by the time an intuitive decision enters consciousness, which is why intuition can seem like magic to the conscious minds experiencing it.

Intuition seems closer to how AlphaGo works. By studying millions of human Go moves, then playing against itself to become better than human (creepy), it learns patterns. When playing a game, instead of flipping through every possible move, it has already narrowed down the possibilities based on its vast, learning-fueled “unconscious” mind. AI has been improved by making it more human.
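Here’s a toy sketch of that difference. It is emphatically not AlphaGo’s real architecture (which couples deep neural networks with Monte Carlo tree search); it just illustrates “narrow the options first, then deliberate” versus “deliberate over everything”:

```python
# Toy illustration (not AlphaGo's actual algorithm): contrast exhaustive
# deliberation over every legal move with a learned "intuition" (a policy)
# that prunes the options before any careful evaluation happens.

def exhaustive_choice(position, legal_moves, evaluate):
    # Deep Blue-style: carefully score every legal move, pick the best.
    return max(legal_moves, key=lambda move: evaluate(position, move))

def intuition_guided_choice(position, legal_moves, policy, evaluate, top_k=3):
    # AlphaGo-ish in spirit: a policy (standing in for pattern knowledge
    # learned from millions of games) ranks the moves, and only the few
    # most promising candidates get the slow, careful evaluation.
    candidates = sorted(legal_moves, key=lambda move: policy(position, move), reverse=True)[:top_k]
    return max(candidates, key=lambda move: evaluate(position, move))

if __name__ == "__main__":
    moves = list(range(100))                      # pretend these are 100 legal moves
    policy = lambda pos, m: -abs(m - 42)          # "intuition" prefers moves near 42
    evaluate = lambda pos, m: -((m - 40) ** 2)    # slow, careful evaluation
    print(exhaustive_choice(None, moves, evaluate))                # examines all 100 moves
    print(intuition_guided_choice(None, moves, policy, evaluate))  # examines only 3
```

The interesting part is which function looks more like a human expert: the second one, where most of the work happens before anything resembling deliberation begins.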

Which is to say: hah! I was right! I called this ten years ago! Pay me a million dollars, Google.


P.S. This Wired article also rather beautifully expresses the match in terms of human/machine symmetry.

The Age of the Companion is Here

[Image: Doctor Who companions fan art]

[I don’t watch Dr. Who and have no idea if this makes sense] [Source]

For the last seven years or so, our technology lives have revolved around apps. The majority of what we do with our devices is a result of choosing an app, then opening it up to get something done*. Call it the Age of the App.

I believe that in 2016 we are moving into a different age: The Age of the Companion.

A companion isn’t a piece of software that you open to get something done. Rather, it proactively works with you to get things done, works autonomously when you’re not around, and may integrate multiple apps and hardware to work.

That’s still a vague definition, but it ties together a trend I’m seeing that hasn’t yet had a good name put to it.

The simplest examples are companions that work directly alongside or inside good old apps (remember apps? From the previous age? Those were the days). Facebook’s M is a companion that lives inside of Facebook Messenger–which is itself a sort of companion to Facebook. M’s artificial intelligence is able to chat with a person, offering assistance with almost anything; a good example is getting it to cancel your cable subscription for you. A companion that helps you avoid the pure evil of Rogers or Comcast is sure to become your best friend pretty quick. Right now, a human on the other end takes over when the AI can’t, but that will become less common as AI improves.

Similarly, I’ve been using an app called Lark that could be considered a companion. It gathers information from HealthKit (e.g. steps taken, weight, sleep) and sends proactive notifications to motivate you to be healthier. When you open the app, it chats with you, Messenger-style, to gather more information and offer advice.

These examples are software, but companions can be hardware too. The Apple Watch is a bit muddled in its purpose, but I think it works best as a companion to both you and your iPhone. It sends notifications, sure, but even when you’re not paying attention to it, it’s gathering data about you, occasionally offering up advice based on that data (“you’ve almost reached your goal” is a good example; “stand up” every hour is … less good). It can pass information on to other companions (like Lark), essentially forming an AI committee that collaborates to better your life.

Amazon is a surprising early leader in the Age of the Companion. The Echo is a semi-creepy always-listening rod that sits in your house and collaborates with various apps to help you, interacting using mics and speakers alone, as if it thinks it’s people. Their Dash technology detects when physical goods (detergent, printer ink, medical supplies) are running low and automatically orders more. Soon, Prime Air delivery drones will take over from there, flying packages to your home in 30 minutes. Right now I wouldn’t consider those single-purpose drones companions, but what if they ask “can I grab anything else on the way?” before coming? What if they ask “can I help mow your lawn while I’m here?” when they arrive? Maybe putting blades on our AI isn’t advisable (especially after they talk to evil cable companies; it might give them ideas), but you get the gist.

See the pattern? This isn’t just software. It’s not just the Internet of Things. It’s not just artificial intelligence. It’s all these advancements working together to automatically, proactively make a specific person’s life better.

In the Age of Companions, “there’s an app for that” is replaced by “can we help with that?”

It’ll get disturbing before it gets mundane. “Hey uhhh, your watch detected a drop in serotonin, and your calendar said you’re free for a few hours, so I invited all your nearby friends over to cheer you up. Also, watch out, your new puppy is about to air-drop.” But when companions mature, the world will be much different, and hopefully better. The nice thing is that the Age of Companions is already underway, so even if Lark doesn’t prolong our lives indefinitely, we’ll get to experience a different world if we’re around for a few more years.

The future is becoming a complicated place to live in, but at least we won’t have to do it alone.

* It’s easy to forget that it wasn’t always this way. Most machines served a single function; a phone was a phone (like, a thing you talked into), a screen was a screen, a camera was a camera, etc. Computers have always had applications, of course, but they were expensive toolsets, different than what we generally consider “apps” today.

P.S. Andy Berdan pointed me to another perfect example called x.ai. It automatically works to schedule meetings with a group of people, just by including its address in a regular email. It highlights that, like the other examples above, these companions aren’t apps, and they can run on many platforms.

Our established technology—hardware, sensors, apps, messaging, even email—is becoming a platform for companions. Just like clocks are now only tiny pieces of smartphones, all our whiz-bang gadgets and applications are becoming nothing more than infrastructure upon which companions are built.

BlackBerry Acquires Good Technology: Initial Analyst-Type Thoughts

In a surprise move, BlackBerry has announced its purchase of Good Technology. Enterprise mobility management (EMM) is one of my main areas of focus here at Info-Tech, so this is big news in my little world. Here are some initial thoughts:

  • This isn’t surprising. We’ve been expecting consolidation in the EMM space since the days when it was just a few small vendors doing it. In the past few years, those small fish have been gobbled up by bigger fish, and now the market is dominated by the terrors of the enterprise seas. BlackBerry may have sagging fins and a few missing scales, but it’s still just another step along the expected path to EMM consolidation.
  • This is surprising. BlackBerry isn’t like some of the big acquirers (think VMware acquiring AirWatch) that were missing an EMM product and bought into the market. One could argue that BlackBerry invented EMM with BES, and BES 12 is a perfectly decent, but lagging, cross-platform management suite. Now they’ve acquired Good Technology, which is … a decent but lagging cross-platform management suite. What gap is BlackBerry filling? (Hah, “blackberry filling.”)
  • Is this a joint admission of defeat? This seems like two dinosaurs linking arms in hopes of taking a stand against the meteor. BlackBerry failed to evolve when consumerization brought better hardware, and related management technology, into their enterprise territory. Good failed to evolve when those same forces made users realize they’re perfectly able to get work done without a clunky pain-in-the-butt locked-down container. So they’re both behind other EMM vendors, and maybe this is an admission that they need each other’s help to catch up. They can put their enterprise experience and patents together to go (somehow) take back the territory being ravaged by VMware, Citrix, IBM, and MobileIron.
  • What is MobileIron going to do? They’re the last major pure-play EMM vendor left (except maybe SOTI). MobileIron has been stubborn about moving forward without the backing of a larger vendor, developing its own technology when it can, and forming strong partnerships when it can’t. It has fiercely defended its patents against other vendors—such as Good—to remain self-sufficient. But then again, AirWatch seemed like it was doing fine on its own before VMware came along. I just wonder who would grab MobileIron. Google? Samsung? Amazon’s been making some insane moves. Maybe a telecom company like AT&T or Verizon will purchase MobileIron instead of just selling it under a different name.

So, we are living in interesting times for EMM. As it gains footholds in areas like Windows 10 management and the Internet of Things, moving beyond mobility alone (and maybe needing a new acronym), it will continue to be interesting. It remains to be seen if BlackBerry and Good can form some supergroup that takes back the enterprise stage, or if this acquisition is just the wailing of two dying cats.

[I’m sorry that I can’t decide if EMM vendors are fish, dinosaurs, or musical cats. It’s the Friday before a long weekend and my metaphor skills are like a … sort of like … um … they’re bad.]

Laziness Drives Progress

Image via Rinspeed

I think about autonomous cars a lot.

That’s partly because I don’t enjoy driving. However, a lot of people do. Many of those people promise that they will never buy a self-driving vehicle. I propose that laziness will drive that promise right out of them.

Today, even people who own cars will occasionally take a taxi. To the airport, or out drinking, or when traveling. As taxis become autonomous, they will be even more convenient. Imagine tapping your smartphone, then 30 seconds later a car arrives for you, and you can step inside and keep dicking around on your phone, or have a meal, or get work done, until it drops you off right at your destination. And it only costs a few dollars.

Even people who love driving will take advantage of that once in a while. At first maybe it’ll only be to get to the airport. But then it’ll be when they have a deadline coming up, or are really hung over, or are just feeling lazy.

As those situations become more common, and driving your own car becomes less common, the per-trip cost of owning a car becomes prohibitive. Is it worth tens of thousands of dollars in purchase price, fuel, maintenance, and insurance just to drive a car once a day? Once a week? What about once a month?
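Rough, made-up numbers make the point. Here’s a back-of-the-envelope sketch (every figure below is a hypothetical placeholder, not real pricing):

```python
# Back-of-the-envelope sketch with made-up numbers: spread a car's annual
# cost of ownership over however many trips you actually take, and compare
# it to a hypothetical per-trip robotaxi fare.
annual_ownership_cost = 9000.0   # hypothetical: depreciation + fuel + insurance + maintenance per year
robotaxi_fare = 5.0              # hypothetical fare for one autonomous taxi ride

for trips_per_week in (14, 7, 1, 0.25):          # twice daily, daily, weekly, roughly monthly
    trips_per_year = trips_per_week * 52
    cost_per_trip = annual_ownership_cost / trips_per_year
    print(f"{trips_per_week:>5} trips/week -> ${cost_per_trip:,.2f} per trip "
          f"(vs. ${robotaxi_fare:.2f} for a robotaxi)")
```

With those placeholder numbers, daily driving works out to roughly $25 a trip, weekly driving to about $175, and monthly driving to nearly $700, while the robotaxi fare stays flat. The exact figures don’t matter; the shape of the comparison does.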

“I’m too lazy to drive, just this once” can quickly become “I haven’t driven in a month and I might as well sell my car.” As more and more people succumb to laziness and rely on a cloud of autonomous vehicles, houses will gradually lose their driveways and garages, and the thrill of driving will be confined to go-kart tracks.

In short, human laziness will lead to a more efficient, car-ownership-free world.

I think it’ll be a good change. The people who disagree will be too lazy to resist it.

How Artificial Intelligence Will Kill Science With Thought Experiments

Think about this:

Science—empirical study of the world—only exists because thought experiments aren’t good enough. Yet.

Philosophers used to figure out how stuff worked just by thinking about it. They would take stuff they knew about how the world worked, and purely by applying intuition, logic and math to it, figure out new stuff. No new observations were needed; with thought alone, new discoveries could be created out of the raw material of old discoveries. Einstein developed a lot of his theories using thought experiments. He imagined gliding clocks to derive special relativity and accelerating elevators to derive general relativity. With thought alone, he figured out many of the fundamental rules of the universe, which were only later verified with observation.

That last step is always needed, because even the greatest human intelligence can’t account for all variables. Einstein’s intuition could not extend to tiny things, so his thought experiments alone could not predict the quantum weirdness that arose from careful observation of the small. Furthermore, human mental capacity is limited. Short-term memory can’t combine all relevant information at once, and even with Google, no human is capable of accessing all relevant pieces of information in long-term memory at the right times.

But what happens when we go beyond human intelligence?


New York as painted by an artificial intelligence

If we can figure out true artificial intelligence, the limitations above could disappear. There is no reason that we can’t give rise to machines with greater-than-human memory and processing power, and we already have the Internet as a repository of most current knowledge. Like the old philosophers on NZT, AI could take the raw material of stuff we currently know and turn it into new discoveries without any empirical observation.

Taken to a distant but plausible extreme, an advanced AI could perfectly simulate a portion of the world and perform a million thought experiments within it, without ever touching or observing the physical world.

We would never need science as we know it again if there were perfect thought experiments. We wouldn’t need to take the time and money required to mess with reality if new discoveries about reality could be derived just by asking Siri.

It solves ethical issues. There are a lot of potentially world-saving scientific discoveries held back by the fact that science requires messing with people’s real lives. AI could just whip up a virtual life to thought-experiment on. Problem solved.

Of course, AI brings up new ethical problems. Is a fully functioning simulated life any less real than a physical one? Should such a simulation be as fleeting as a thought?

As technology advances, there will be a lot to think about.

Book Review: Bloodsucking Fiends, by Christopher Moore

Oh look, I’m reviewing yet another vampire novel. Whatever. Just be happy I haven’t resorted to Twilight yet.

Bloodsucking Fiends tells the story of a newly formed vampire who, in order to function in modern society, recruits a human to do stuff for her during the day. Inevitably and for no good reason, they fall in love with each other.

Christopher Moore is known for writing humour, and that is really the main draw here. The ridiculous situations and jokes embedded in every sentence make for an entertaining read.

Plot-wise, it’s not as strong. Events seem to unfold only for the sake of setting up the next event, or sometimes for no reason at all other than a punchline. Entire plot lines are introduced with good promise, but then left dangling as pointlessly as a classic vampire’s cape. Maybe the two sequels pick them up.

If you’re into sexy vampires, there are certainly less sucky ways to spend your time than reading Bloodsucking Fiends.

Tolerance, Conflict, and Nonflict

A lot of conflict can be explained in terms of differing tolerance levels. A disagreement may simply be a matter of one person hitting their limit before another.

An example will help: let’s say a couple is fighting because he feels like he always has to clean up her mess around the house. It would be easy to label her as a slob and/or him as a clean freak, but maybe they just have different levels of tolerance for messes.

Let’s say he can tolerate four dirty dishes before cleaning up, while she can tolerate five. They agree on most things: too many dirty dishes are bad, cleaning up after one dish is a waste of time, etc. They have no fundamental disagreement. Yet, that one-dish difference will result in him cleaning up every time, simply because his four-dish limit gets hit first. That can lead to other conflicts, such as unequal division of labour, questioning compatibility, failure to communicate, etc. All because of one very small difference in tolerance.

How does this help us resolve conflict? On one hand, it can help foster understanding of different points of view. Many conflicts are not between people on different sides of a line, but rather between people at different distances from the same side of the line. It’s worth noting that most people don’t choose their limits; they were born with them, or had them instilled early on, or simply believe their own limits are the rational ones. Sometimes the resolution to a conflict can be as easy as “ok, your limit is here, my limit is here, and that’s okay.”

On the other hand, living with other humans often necessitates adjusting our tolerance levels. Things run more smoothly if our limits are close. In the example above, if she dropped her tolerance to four dishes 50% of the time, each of them would clean up half the time, and they’d live happily ever after. Sometimes it’ll have to go the other way too: if he’s not too ragey with disgust after four dishes, he could wait until five, then she hits her limit and naturally cleans up. Either way, hooray for compromise.
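If it helps to see the asymmetry play out, here’s a tiny simulation of the dish example, with made-up parameters and one stated assumption: when she chooses to act at his limit, she’s the one who cleans that round.

```python
import random

# Tiny simulation of the dish example (made-up parameters). Whoever's
# tolerance is hit first cleans the pile. Assumption: when she deliberately
# lowers her limit to match his, she is the one who cleans that round.
HIS_LIMIT = 4
HER_USUAL_LIMIT = 5

def simulate(p_she_compromises, rounds=10_000):
    he_cleans = she_cleans = 0
    for _ in range(rounds):
        her_limit = HIS_LIMIT if random.random() < p_she_compromises else HER_USUAL_LIMIT
        if her_limit <= HIS_LIMIT:
            she_cleans += 1   # she hits (or matches) the limit first
        else:
            he_cleans += 1    # the pile reaches four dishes before her limit of five
    return he_cleans / rounds, she_cleans / rounds

print(simulate(0.0))   # no compromise: he cleans ~100% of the time
print(simulate(0.5))   # she matches his limit half the time: roughly a 50/50 split
```

A one-dish difference in thresholds produces a 100/0 split in labour; a small, occasional compromise turns it into 50/50.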

This may be a subtle point, but I think it’s a good one: many disagreements are not disagreements at all. It’s not that one person is wrong and the other is right. They’re just feeling different things based on how close they are to their limit. That is much easier to deal with than genuine conflict, especially if it’s recognized as the non-conflict (nonflict) it is.

Double Book Review: Books With Weird Titles Edition: Wool and Draculas

Here are two books I’ve read recently, with not much in common other than having weird titles and being released directly to digital.

Wool, by Hugh Howey

Wool is the first in a long series of books about people living in a mostly-underground silo after some sort of apocalypse makes the outside world uninhabitable. Their only view of the outside world is through cameras that get dusty over time, until someone is sent out to clean them (with wool), then inevitably succumbs to the poisonous atmosphere.

It’s a small book with big ideas. It’s small in its novella length, but also in its limited scope. It follows one character through an intimate story, never straying too far into the larger consequences of it. Yet the small story explores bigger themes of, among other things, truth and beauty.

There’s nothing too new here, but it’s nicely written, and balances emotional depth with hard sci-fi ideas. The second one was also good, but felt more like a tour of the setting to set up future instalments than a story where anything actually happens. Each instalment is only a few bucks and they are released frequently; it’s definitely worth checking out the first one to decide if it’s worth jumping into the rest of the series.

Draculas, by Jeff Strand, F. Paul Wilson, Jack Kilborn, Blake Crouch, and J. A. Konrath

Yeah, four authors. Yeah, Draculas with an S.

When an elderly, dying millionaire buys a skull with sharp, stabby teeth, then proceeds to stab himself in the neck with it, it starts an outbreak of vampires with similar bitey stabby tendencies. That’s the premise of Draculas, in which vampires are slobbering, near-mindless animals with rows of needle-sharp teeth that need blood like we need air. It’s a refreshing take on the played-out vampire trend.

The violence in Draculas is over the top, managing to be both hilarious and disturbing. It’s clear that all four authors had a hell of a lot of fun writing it, which makes it a hell of a lot of fun to read.

There’s not much in the way of plot; this is a summer action movie in novel form. But having no idea who will live or die keeps it interesting enough, especially with the strong characters. I particularly liked Randall, the borderline-challenged lumberjack whose substitution of “vampires” with “draculas” spreads through the characters faster than the vampire epidemic itself. And I won’t spoil anything, but Benny the Clown’s story takes some of the greatest twists and turns.

Despite the police-lineup-sized list of authors, Draculas is one cohesive novel-length story. On top of that, the Kindle Edition of Draculas comes with a bunch of DVD-like extras in it, including short stories by the authors and deleted scenes. Of particular interest to me as a writer, they included the unedited string of emails between authors that got the project going and worked out the logistics of writing it. It’s fascinating — maybe even more fascinating than the book itself — to get that raw look at the creative process.

Anyway, if you’re into monsters with sharp teeth and their intersection with human flesh, give Draculas a try.

P.S. It was almost impossible to write that review without mentioning sparkly vampires.