Are you optimistic about our future with ARTIFICIAL INTELLIGENCE? Avi Goldfarb (professor and author of “Prediction Machines”) shares a brief history of AI, various AI applications that are being used in the marketplace, and specific reasons why we should be optimistic about our future with AI.


References & Links

Avi Goldfarb

The Organizations

Articles, Books & Concepts

The People



Interview Transcript

Professor Avi Goldfarb: Thank you.

Dr. Andrea Wojnicki: Let’s start with some context. I think our listeners would love to hear how you came from studying economics and then working as a marketing professor at a business school to writing a best seller on AI.

AG: Okay, so I was a graduate student in economics in the late 1990s. And there was this crazy new technology called the internet. So my dissertation was about competition between search engines, before there was such a thing as Google.

AW: Just to remind some of us or to provide context for the younger listeners… What were those search engines again?

AG: So AOL had its own search engine. There was Lycos, there was HotBot. And the dominant player was Yahoo!

AW: right.

AG: Yahoo! is still around. And I should say Google was in the final data that I used for my dissertation, because it was from 2000, and Google had just come out of beta. I had 20-something search engines in the data, and Google was number 17. So they were there, but they were tiny. My teaching was marketing and statistics. And Ajay Agrawal, my co-author on the book, started this program called the Creative Destruction Lab. The Creative Destruction Lab is a program to help science-based start-ups scale up, and we started in 2012. In that first year, there was this company called Atomwise, which called itself an AI company. We had never really heard of AI outside of science fiction. And they were building AI for biotech. Then the next year, there were a couple of AI companies. And the year after that, it became clear that this was a big new technology, because there was a flood of companies doing AI. Pretty soon at the lab, we had more AI start-ups than anywhere else in the world, because of some quirks of history that gave Toronto an important place in all this.

AW: So Toronto is an AI cluster?

AG: Yes. Or at least it started that way, and it still plays an important role. The core technology underlying the current excitement about AI is something called deep learning. And perhaps the core researcher in deep learning is a man named Geoff Hinton, who's a computer science professor, emeritus now, here at the University of Toronto. His graduate students and the people who worked with him were walking through the University of Toronto 10 to 20 years ago. And those people now run AI research at Apple, Facebook, OpenAI, etc. So Toronto had this really important role, especially in the early stages. And then people realized there was a commercial opportunity here, which was around 2012.

AW: What happened in 2012?

AG: A team of Geoff Hinton's graduate students essentially won this competition called the ImageNet competition. ImageNet is a competition to label pictures…

AW: what does that mean – labeling?

AG: So figuring out what's in a picture. You see a picture of a Bernese mountain dog, and the machine has to predict it: "Oh, that's a picture of a Bernese mountain dog, not a chihuahua, and not a muffin." So it's a machine vision competition. Hinton's team in 2012 was much, much better than anyone who had come before, and anyone in that year. They were using a newly applied technology called deep learning. In some sense the technology goes back 30 years, but we finally figured out how to commercialize it in 2012. And it really worked. The next year, almost everybody was using deep learning. So people started to pay attention, more generally, to the commercial opportunities in this particular technology. And that led to lots of start-up excitement here in Toronto, with people largely coming out of Hinton's lab, but also out of Waterloo and a few other places, and then, more generally, around the world. That was when the opportunities became clear.

AW: And so then you fast forward to 2018, which is when your book was published. Can you give us a little bit of background in terms of definitions? You talked about deep learning. And we know a little bit about machine learning, which may be a subset of AI? I've also heard you say that machine learning is a subset of computer science. So how do we think about all those terms when we hear them? I know the media can be sloppy when they talk about them.

AG: Okay. So artificial intelligence is defined as machines that can do what normally requires human intelligence. It's a very broad definition. And it's a moving target, in the sense that you can imagine that in the 1940s, artificial intelligence would have been arithmetic. But then, you know, computers do arithmetic really well. In the 1970s, we thought artificial intelligence was chess; now computers can do chess, and we don't really think about that as AI anymore. So it's this moving target. Now, machine learning is the branch of artificial intelligence, a type of artificial intelligence research, that has had massive advances in the last few years. The reason we're talking about AI in 2019, and we weren't talking about it in 2009, and we weren't talking about it in 1999, is because of machine learning. And most notably, within machine learning, this technology called deep learning.

AW: So, deep learning is a subset of machine learning. And machine learning is a subset of AI, and ….

AG: which is a subset of computer science.

AW: Okay, got it.

AG: But the way we should think about machine learning is as prediction technology. So if you've taken a stats course, and you learned how to use regression, or an average, to predict something, machine learning is a variant of that kind of tool.

AW: So you just started to answer my question. But I’m looking for something maybe a little bit more definitive. And I’ve been actually thinking about this for the past week. Is machine learning anything more than linear regression? REALLY?

AG: Yes. Because it's really good. For what we call supervised learning, which is the dominant type of machine learning that we've been focused on, you're using inputs to predict outputs, which is what you do with linear regression. So in that sense it's the same thing, yes: you have a bunch of X's, and you're using those to predict Y. You have a bunch of images, and you're using them to predict the label; you have a bunch of sentences, and you're using them to predict meaning. That's where it gets trickier, though, because those are things we can't do with linear regression.

AW: So it’s an ok model to use when you’re trying to picture AI in your mind?

AG: Yes.

AW: But beyond that, because to be honest, over the last couple weeks, when I’ve been doing a little bit of research on AI… I just keep coming back to: this is what I learned in stats class, it’s all about analysis of variance, finding out which variables predict an outcome, which ones account for the most variance.

AG: Yes, yes, with a few nuances and tricks… So I have some machine learning textbooks on the shelf here. If you open them up, the first ten chapters will look like the first ten chapters in your stats class.

AW: okay.

AG: And, you know, they re-label things. So what we used to call cluster analysis is now a version of unsupervised learning. And maximum likelihood is now part of machine learning, in some sense. But there are a few things that are different. One is that, in inventing a new stats method, you had to prove that it worked in theory before showing that it works in practice. The machine learning norms are a little bit different: they're very much about showing that you can predict out of sample. So there are these hold-out samples, and you show that you can predict out of sample. And if you do a good job, then we'll figure out the theory later. Typically, they do figure out the theory later. But the starting point is: can you incrementally improve the prediction? Rather than: can you prove formally that this works with an infinite number of observations?
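The hold-out idea Goldfarb describes can be sketched in a few lines of Python. This is a toy illustration, not anything from the interview: simulated data with a known relationship, an ordinary-least-squares fit on a training set, and an error check on a held-out set the model never saw.

```python
import random

random.seed(0)

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def mse(a, b, xs, ys):
    """Mean squared prediction error of y = a + b*x on (xs, ys)."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Simulated data: true relationship y = 2x + 1 plus noise.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2 * x + 1 + random.gauss(0, 0.5) for x in xs]

# Split: fit on the first 150 points, hold out the last 50.
a, b = fit_line(xs[:150], ys[:150])
print(f"train MSE:   {mse(a, b, xs[:150], ys[:150]):.3f}")
print(f"holdout MSE: {mse(a, b, xs[150:], ys[150:]):.3f}")
```

If the two errors are similar, the model genuinely predicts out of sample rather than just fitting the data it was shown; a train error far below the hold-out error is the classic symptom of overfitting.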

AW: So the other question I was thinking about, particularly when I was thinking about the title of your book, Prediction Machines: there's an article by Malcolm Gladwell in the New Yorker from years ago, 15 years or more, where he talks about collaborative filtering, which is early AI, right?

AG: Yes. One hundred percent. There’s a lineage from that Malcolm Gladwell article to AI if you sort of, you know, track the academic citations. That said, that wasn’t really how we thought about it. Because collaborative filtering doesn’t really seem like artificial intelligence. It just seems like good stats.

AW: It hadn’t been labeled that way?

AG: It hadn't been labeled that way yet. What happened here is, in some sense, communication related, which is: the inspiration for deep learning, the idea of deep learning, was the model of how brains work, how neurons interact with each other, to build a computer that could think like a human.

AW: and then maybe think better than a human, right, faster, more thoroughly, whatever.

AG: Now, in practice, it doesn't really think like a human. But what it turned out to be really good at is predicting, which is the process of filling in missing information.

AW: Got it. So can you give us some examples of AI that we use in our everyday lives? It could be at home or at work. And I was thinking maybe even commuting between the two.

AG: So the most obvious is Google search. Others are your maps, for example Waze, or Google Maps for that matter. How do they figure out what the best route from one place to another is? That's a prediction technology. They're predicting traffic and laying that on top of information they have about the map and speed limits, to give you both a prediction about how long it's going to take to get there and a prediction about what the best route to take is.
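The layering Goldfarb describes, a fixed road map plus predicted per-segment travel times, can be sketched with a toy network and a standard shortest-path search. The places and minute values below are invented; a real app would predict the segment times from live traffic data.

```python
import heapq

# Predicted minutes for each directed road segment (traffic-adjusted).
predicted_minutes = {
    ("home", "highway"): 5,
    ("home", "side_street"): 2,
    ("highway", "office"): 8,      # usually fast, but congested today
    ("side_street", "bridge"): 4,
    ("bridge", "office"): 3,
}

def best_route(start, goal):
    """Dijkstra's algorithm over predicted times; returns (minutes, path)."""
    graph = {}
    for (a, b), t in predicted_minutes.items():
        graph.setdefault(a, []).append((b, t))
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return None

print(best_route("home", "office"))
```

With today's predicted congestion, the side-street route beats the highway; change the prediction for ("highway", "office") and the recommended route flips, which is the whole point of laying predictions on top of the map.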

AW: Waze is my number one favorite app on my phone. And I'll say a couple things about it. First of all, it has, I'm sure, saved me hours of time. And secondly, it's saved me mental capacity that I can spend doing other things. I literally said to my son the other night, when he was arguing with Waze about how to come home, "Can we just let Waze make that decision, and you and I can talk about something that matters?"

AG: So yes. To the extent that machines are doing tasks that we don't really enjoy, or that take time away from things we'd much rather be doing, this is fantastic. And Waze is a good example of that. But beyond Google and Waze, some of the most exciting applications, I think, are a little bit outside the everyday, but they're really big. More than anything, the one I'm excited about is translation.

AW: Right. That’s definitely related to communication. So let’s talk about that.

AG: It's getting really good. Erik Brynjolfsson and his co-authors have this new paper showing that when eBay added machine translation to eBay pages, it massively increased the propensity of Americans to start buying stuff from Latin America, and vice versa. So this easy translation led to much more commerce. And the translations are still pretty imperfect, but they were good enough that you could deal with the uncertainty. They showed some 15 to 20% increase in sales, just because of translation.

AW: Okay, and so it’s on the screen? So you know, when you pull up a certain screen, sometimes it’ll say, do you want to translate this page? Is that what you’re talking about?

AG: So yes, eBay is one example. They were doing that automatically, if the seller wanted it. But yes, that's the kind of example. And that just makes communication much easier. When uncertainty is reduced, you're willing to do more things. So, to the extent that it's a little less intimidating to go somewhere where you don't speak the language at all, people will travel more. If you can at least take a picture of a street sign, and now read it and match it to the directions you want to go, that's a big change.

AW: That’s exactly where I was headed – to travel. So do you think people may be more likely to travel if the language challenges are diminished by AI?

AG: No one's done that study yet. So it's hard for me to…

AW: Oh, low hanging fruit for your next paper!

AG: So here's what we do know: Michael Kummer has this paper showing that Wikipedia pages have increased travel. So just simple reductions in uncertainty. In particular, what he showed is something along the lines of: when pages for towns in Southern Europe, I think in Spain, were translated into German, we saw an increase in German travelers to those towns. OK?

AW: Wow.

AG: And that wasn't machine translation; that was human translation. But it showed that lowering the language barrier, and that's not about travel per se, it's about getting information about the town, led to an increase in travel.

AW: So does AI exist such that you and I could have a conversation, either face to face or over the phone and have our conversation simultaneously or instantly translated?

AG: Not quite yet, to the extent that there has to be a pause, because your meaning isn't clear until after the sentence is finished. In the short term, it seems unlikely to be as smooth as face to face. That said, for the purposes of business communication, where you'd have a translator, or retail transactions, where that little pause doesn't matter so much, we're going to see massive advances. For casual friendship conversations, there's still going to be this awkward pause while you wait for the translation. That's gonna be harder.

AW: Right. So another question that I wanted to ask you later is about skills and jobs that are likely to grow versus go away because of AI. And I guess the translator is one, right? I mean, we’re going to need some great translators to help us program the artificial intelligence, but then the job may go away.

AG: Yes. There's the vision that at some point, you know, decades in the future, these translations will be perfect, and then maybe we won't have any use for translators. In the short term, in a lot of the places where we hire professional translators today, we need the translations to be very good, right? And to get the nuances of the culture and all that right, in a way where we're still not there with machines. In contrast, lots of casual transactions, where maybe you're not going to hire a translator but it would be nice to have a guide, those will be much easier. If I were advising my children on jobs, I might say translator is not the best way to go. But at the same time, in the foreseeable future, there are gonna be plenty of things for translators to do. But they'll have to be quite skilled, and not just at translating language. It's about understanding nuances and culture and all that.

AW: So for a separate podcast episode, I actually did a little bit of research on body language. And I stumbled on robot learning, where robots are being programmed to both encode and decode body language as another layer of communication. So there's the verbal, what we hear, and then body language, and robots need to be able to perceive it and also to communicate it. Yes, it's a little bit frightening to think that there are all these layers of things.

AG: Or exciting, depending on your point of view.

AW: why would it be exciting? And then also, why would it be scary?

AG: Okay, so let's start with Daniel Kahneman. We run this economics-of-AI conference every year, and Daniel Kahneman, who won the Nobel Prize in Economics, and he's a psychologist, was talking about AI. We asked him to speculate on how he thought about it, and to comment on the ideas of the conference so far. One of the things he emphasized: we have this idea that, for example, caring occupations are inherently human, and that we'd want humans to be doing them. And he said, I don't think that's true. In our old age, do we really want our children taking care of us? They're going to get frustrated, they're gonna get angry. No. We want our children to come and love us and talk to us. But it's gonna be much better to have a robot take care of us, because they're not going to get frustrated with us, they're not going to get angry at us. They're going to be programmed to deal with both our body language and our voice, and what we're asking for, and what the doctor prescribed, and to figure out how to gently nudge us in that direction. And one of the things he said was along the lines of: it's not that hard to create a robot face that people respond positively to, because it's gentle and kind and cute. So his take was that, especially with an aging population, this is a great thing, not a bad thing, because we humans can interact with each other on the things that are really human, on the love and caring side, and not the frustrating, day-to-day, anger-inducing "take your pills, did you rest properly, when did you go to bed" things.

AW: Interesting.

AG: That's one version of the opportunity.

AW: So you're saying that… some people say that AI can never take over our caring-type or nurturing-type roles as human beings. It could be parenting, it could be nursing, for example, it could be teaching. And in fact, there are examples, good examples, of why we should be excited that AI can fulfill some of those roles.

AG: Yes. My examples were different from yours on purpose. My example was mostly about taking care of senior citizens, and that was for a particular reason. It's not just a demographic issue. More importantly, it's the parent-child dynamic.

AW: So there are more valuable interactions that parents and children can have, when the parents are older, than looking after someone's medical needs.

AG: Exactly. I haven't seen any evidence, or talked to anyone, suggesting that it will be better for machines to take care of young children. I haven't heard that story. I guess I can imagine it to be true, but it's a bit of a stretch. And I don't have research to rely on, or at least people who have thought about it deeply, to rely on.

AW: You hear the opposite. You hear that TV is not a babysitter, and video games should not be a babysitter, and they should not be raising your children. And I have a friend who says Fortnite can't be my child's best friend and babysitter.

AG: Right. And so, you know, there's this value to human-to-human interaction. Although, I don't know if you've read The Kids Are Alright by danah boyd. What she emphasizes in there is that we have this idea that as kids, and the kids she's talking about are mostly teenagers, use electronic communication more and more, they're interacting with humans less. We tend to think of that as a bad thing. But it's more nuanced than that. One example she talks about is that there's evidence of a reduction in risky behavior, because essentially kids are staying home and still interacting with their social network, but they're doing it digitally. And yes, there are risks to doing it digitally, but in some sense those risks are lower than if you're actually physically present with somebody else.

AW: Right? So if my 15-year-old son is at home at 11 o'clock at night on a Saturday, playing video games where he's killing people with his friends online, that's actually much less risky than if he was out at a party, you know, surrounded by all sorts of risky temptations, right? So why else should we be excited about AI, in addition to it alleviating us of having to decide the fastest way to get to work, or helping us look after our aging parents?

AG: So the highest-level point is that it's similar to any other new technology, in the sense that it's going to make some economic sense. It's going to make us more productive. What does that really mean? It's going to make us wealthier. Society as a whole will have more resources, and we'll have more choice in how to spend those resources. There's, you know, potentially a big issue around inequality. But if we spread those resources equally, or somewhat equally, everybody can be better off in terms of having more choices for how to spend their time, how to spend their money, and how to consume. So a productivity improvement is a good thing.

AW: And that’s the meta benefit.

AG: The meta benefit comes from a whole bunch of little benefits: what does this particular technology do? And that depends on the particular application. For language translation, we can see all sorts of benefits there. You know,…

AW: It's another step, I guess, in the progress of technology making the world a smaller place. Are there other ways that we can purposely use AI to make us better communicators? So we've got the translating. How else can AI help us be better communicators?

AG: There are a few pieces to this. The first piece is who we communicate with. In some sense, we're getting predictions: you know, LinkedIn, Facebook, Twitter to a lesser extent, they are all giving us predictions about what information we want to see, from whom.

AW: And also increasing our access to the people with whom we have communication opportunities, right?

AG: Yeah, technically that's a platform point, not an AI point, but…

AW: OK. Fair enough.

AG: Sorry.

AW: Don’t be sorry.

AG: So AI ends up screening who we have communications with. And that's going to affect the nature of communications. And to the extent that we…

AW: Sorry to interrupt but who and what right? The content and the person who’s providing the content, right?

AG: So, yes, absolutely. It's not that hard to imagine that when you open up your phone to dial or text somebody, there's a prediction about who you're most likely to want to dial or text, beyond just the most recent: the time of day, what city you're in. And that can affect who you communicate with, potentially in very good ways, depending on how the algorithm is designed. In terms of whether this is good or bad, or how this all plays out, it's a question of what we call reward function engineering, which is: what are you telling the AI to predict? Are you just telling the AI, who are you most likely to call at this moment? Or is there some other longer-run maximization problem? So, for example, every once in a while it might throw in a surprise: oh, you know, I haven't talked to Andrea in a really long time!
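The reward-function-engineering point can be made concrete with a toy sketch: a naive objective that ranks contacts purely by recent frequency, versus a tweaked objective that occasionally surfaces a long-neglected contact. The names, counts, and the 10% surprise rate below are all invented for illustration.

```python
import random

random.seed(42)

# Invented data: (name, calls in the last month, days since last contact).
contacts = [
    ("alex", 30, 1),
    ("sam", 12, 3),
    ("andrea", 1, 120),   # an old friend you keep missing
]

def naive_suggestion():
    """Predict who you're most likely to call: highest recent frequency."""
    return max(contacts, key=lambda c: c[1])[0]

def engineered_suggestion(surprise_rate=0.1):
    """Mostly the naive pick, but sometimes the longest-neglected contact."""
    if random.random() < surprise_rate:
        return max(contacts, key=lambda c: c[2])[0]  # longest gap wins
    return naive_suggestion()

print(naive_suggestion())
suggestions = {engineered_suggestion() for _ in range(200)}
print(suggestions)
```

The naive objective would only ever suggest the most frequent contact; the engineered one trades a little short-run accuracy for the longer-run goal of keeping relationships alive, which is exactly the choice of what to tell the machine to predict.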

AW: I was just thinking that would be fantastic, if my phone actually reminded me: once a week, you really enjoy calling this person, or you should talk to this person.

AG: Right. So figuring that out is going to be a challenge: figuring out what to tell the machine to predict.

AW: Right. I mean, I could, of course, set myself a reminder, but wouldn't it be nice if my phone contained a machine learning algorithm to predict that without me telling it to?

AG: Right, and there's no technical reason, at least at an abstract level, that couldn't happen. There are practical challenges. But there are opportunities there too.

AW: So what about AI affecting our communication with each other across the generations?

AG: One reason to call elderly parents is to make sure that they're taking care of themselves, and in some sense, AI means you no longer have to do that. That can lead to two consequences, one maybe better, one maybe worse. The better one is that when you call them, it's a much more positive conversation. The worse one is that maybe you don't bother to call them anymore. So there's an opportunity here, because now the communication can be better. But as the technology advances, it might reduce those casual, surprise interactions that enable you to be close to people.

AW: true. So what about in a professional context at work?

AG: The most obvious AI in work communications, I think, is these automated replies. So if you have Gmail, Gmail will populate what you're saying, you know, your email, before you even send it. LinkedIn has a similar function: oh, here's what you probably want to respond to this person. It's something very simple, like "Thanks!"

AW: or a thumbs up.

AG: Or a thumbs up or something like that. The first thing is, that makes some communications more efficient, and allows you to triage better: the emails that you actually have to pay attention to, versus the ones you know you can just reply to quickly. As that technology improves, though, it might lead to much more efficient communication between people at the organization, because right now, as you move to the top of the organization, essentially you have people who screen lots of your communications.

AW: The Big brother way?

AG: No, oh, no, I didn't mean in the Big Brother way. I meant the executive assistant way.

AW: Okay.

AG: But there's both. And AI creates an opportunity, because potentially it could do that better, without the whims of somebody… Humans have moods, and they get hungry, and we make better decisions when we're not hungry than when we're hungry and tired, etc., right? And there's an important risk there, which is that those decisions, it's not so much that they might be biased (they will be, because they're trained on human data and humans are biased), but that they scale. And this gets to issues around bias in communication. The headline we always see is that AI is going to be biased, and that's bad. It is bad, but it's probably better than the average human, because we can audit the AI and figure out why it's biased, and improve it.

AW: Except when the bias is scaled. Is that where you’re headed?

AG: Yeah. It's not that we can't audit it; we can. But even a little bias, if it scales massively, means those few people who are affected by that small bias end up being massively, massively hurt. So at the individual human level, yes, on average, we're probably more biased than any well-designed machine is going to be.

AW: but we’re all individuals.

AG: We're all individuals. There is some randomness and heterogeneity in how people respond, even within an individual. From what I understand of the research, people tend to be more biased when they're hungry, relative to when they're not hungry, and when they're tired, relative to when they're not tired, and all these other things. Even within an individual, there's variance that the machines are unlikely to have.

AW: Hmm, so there are factors that impair human thinking that would not affect a machine. Here's a not-so-random question for you that's related to TalkAboutTalk. What communication skills will be the most important, or maybe the most affected, by AI?

AG: Beyond grammar and spelling, which in some sense will matter less because you can get them corrected, I actually don't think it's that much different…

AW: At least in written.

AG: At least in written. Good point. In terms of high-level communication skills, I don't think an AI world is that different from a non-AI world. In terms of very particular things, sure: here's a type of communication we do right now, where we have people respond to company queries by hand, and instead we're going to have a machine do that. We're gonna have chatbots instead of real people chatting. But at a high level, the set of communication skills needed, or the set of any skills needed, for example, what we should teach our primary school and high school age kids, I don't think that's really changed, beyond grammar and spelling being a little less important.

AW: I’m trying to answer the question also, in my own mind,

AG: I’m curious to hear what your thoughts are?

AW: Well, I think you're right, because everybody says that we don't pay enough attention to body language and our nonverbal communication, and that it's way more important than we think it is. And I don't think that's going to change with AI. Because when you do meet someone face to face, and you're not behind a screen, they're still making a lot of conclusions or judgments about you, right? And so it still remains important.

AG: I think it's actually really useful to recognize that in a lot of cases, there's nothing new here. So, not so much about communications, but one of the questions I get most often is: I have a 10-year-old, how should I get them ready for an AI world? In our book, we say prediction is getting better, and there are these other things that therefore become more valuable. One is the ability to take actions. Okay, so what does that mean? In a practical sense, there's a whole bunch of action-related jobs that involve physical work. The most obvious are entertainers, whether athletes or, you know, actors, a whole set of professions that are about enjoyment. And podcasters, yes, absolutely. Those aren't going away. Then there's what we call judgment, which is knowing what matters, which predictions to make, and what to do with them. That's very much about the social sciences and humanities, and understanding what matters to you as an individual, to us as a society, and to your organization. And then there are the people who need to build the machines, so there's a whole bunch of science and technology there too. So that covers, you know, science, math, humanities, social sciences, gym, and art and drama. How those skills are going to manifest themselves in the workforce will be different, but the skills are all there. And I think the same is gonna be true of communication.

AW: Wow, that is much more optimistic than I think I was feeling when I came in here. That's great. I love that. We're perhaps overly fixated on the technology stuff, because we're thinking someone's going to be programming the robot and the rest of us are just, you know, going to be at home, unemployed. Well, no, there are people programming the robot, but how do they know what to program? And then all of a sudden, there are all these inputs. And also, on the other side, what do you do with the output? Right?

AG: Yeah. You need people all the way through.

AW: Hmm. Well, that makes me feel better. Now, I’m going to ask you the five rapid fire questions that I asked every guest, you ready? Number one, what are your pet peeves?

AG: People who confuse correlation with causation.

AW: You’re so funny. Or who say things without a citation?

AG: A little bit of that, too.

AW: Okay. Second question, what type of learner are you?

AG: I learn by reading.

AW: You learn by reading?

AG: Yeah.

AW: You absorb the written word?

AG: Yeah.

AW: Third question. Are you an introvert or an extrovert?

AG: I've always thought of myself as an introvert. But my job is increasingly going out and talking to people in large audiences. I find with a large audience, I seem to be more extroverted. But in casual, small-talk conversation, I'm definitely an introvert.

AW: Interesting. So it depends on the size of the group that you’re communicating with.

AG: It depends on the context. And, you know, for me to go up and perform that’s different than me chatting.

AW: True. But you feel energized after you speak in front of a large audience?

AG:  Yes.

AW: Interesting. Well, you’re in the right profession then right? Because you’re researching, and then you’re going out and you’re presenting to large audiences, and you’re feeling energized by both.

AG: Yes.

AW: Beautiful, okay. Question number four, your communication preference for personal conversations?

AG: With the very small number of people who I'm very close to, obviously verbal, just talking, is best. Face to face. But otherwise, for even long-distance communication with those people, I like email. Email and text. I kind of use them as the same thing, because they're both on my phone. But mostly, it's email.

AW: Email and text is the same? That’s not the case for other people.

AG: In terms of quick conversations with, you know, my wife or my friends, or about a meeting, then they are substitutes. Perfect substitutes. There are things that you can do in email that you can't do in text. You know, long formal work emails, I can't do those in text.

AW: right.

AG: And emojis don't work quite as well in email. But for quick things, I'll have a conversation over email and text with the same person in the same half hour.

AW: Interesting. So there’s not a clear preference there. Interesting. Last question, is there a podcast or a blog or an email newsletter that you find yourself recommending the most?

AG: There's a handful of people I follow on Twitter whom I find really useful in terms of what they link to. Erik Brynjolfsson at MIT posts all sorts of good stuff. My co-author on the book, Joshua Gans, also posts all sorts of good stuff, and I follow him. And then there's a woman whose expertise is in international security and technology, named Elsa Kania. And I find her stuff on the impact of technology on the military fascinating.

AW: All right. So how can listeners connect with you?

AG: LinkedIn is best.

AW: Okay, I’ll put a link in to your LinkedIn address in the show notes, and also to your book, Prediction Machines. And I want to thank you very much for your time.

AG: Okay. Thank you.


THANK YOU for listening! And READING!


TalkAboutTalk CORE BELIEF:

“When we communicate effectively,

we can be a better friend, parent, partner and work colleague.”



“TalkAboutTalk is the communication learning platform

that enriches our relationships

and enhances our career success

by providing us with

knowledge, strategies and confidence.”  

The TalkAboutTalk weekly email blog is your opportunity to receive one concise email from me each week, highlighting knowledge & strategies that will help us become more effective communicators. SIGN UP NOW!:

TALK soon!







***When referencing resources and products, TalkAboutTalk sometimes uses affiliate links. These links don’t impose any extra cost on you, and they help support the free content provided by TalkAboutTalk.