Entries

  • The best storyteller I ever met ✨ II

    The best storyteller I ever met was my dad. Or at least he’s one of them – I’m biased of course!

    As I was saying, good storytellers like him know when and whether to tell a story in the first place.

    And like any reporter or standup comic, they don’t bury the lede or the punch line; they reveal the so-what upfront.

    A good storyteller also:

    1. Knows their audience and senses the moment to gauge interest in their story
    2. Gauges that interest with a hook – maybe a joke or funny observation, maybe an odd fact or quote. Lets it go if there isn’t interest.
    3. Tells the full story as one short statement, if there’s only slight or polite interest
    4. If there’s higher interest, tells the story in more detail but breaks it into chapters or parts
    5. Ends it neatly after any of those parts – or when the story is done

    I think you can tell a great story in one sentence. But the great storytellers can tell a long, detailed story, per step 4, and make it just as punchy as the one-sentence one.

    This means breaking the story into small parts in your head and rearranging them chronologically if needed. And the rarest skill: the ability to summon a great number of facts, moments, and personal details and roll them all into detailed but concise passages of speech. It’s almost as if you think and speak in full, professionally edited paragraphs.

    Until it all comes together in a tidy ending, like a gymnast perfectly sticking a landing.

    Yes, but how do you apply all this to business solutions storytelling?

    If that question interests you, an example approach is the Brightr Story Framework, which makes a case study a story by highlighting the people and key moments of a business engagement.

  • The future of personalization

    When you open Instagram, Facebook has 10 million ads it could show you. But it chooses just one to begin with. How? Step by step. To vastly oversimplify for illustrative purposes, Facebook chains targeting filters together: gender, age, location, hobbies, bidding/budget, and so on. The point I want to stress is that the filters don’t happen all at once; they’re sequential.

    This is how code interpreter depicts it:

    Personalizing through chaining filters (or prompts)

    By the way, this is also how prompting an LLM works when you chain prompts together – each prompt narrows things down, a process of elimination. That’s how you can deliver more personalized content, analysis, or summarization.
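
    To make the sequential part concrete, here’s a minimal Python sketch – the audience fields, filter criteria, and numbers are invented for illustration, not Facebook’s actual pipeline – in which each filter only ever sees the people who survived the previous one:

    ```python
    # Hypothetical illustration of chained targeting filters.
    # Each step narrows the candidate audience before the next step runs;
    # the filters are sequential, not simultaneous.

    audience = [
        {"id": 1, "age": 34, "city": "Berlin", "hobbies": {"running", "cooking"}},
        {"id": 2, "age": 19, "city": "Munich", "hobbies": {"gaming"}},
        {"id": 3, "age": 41, "city": "Berlin", "hobbies": {"running", "chess"}},
    ]

    filters = [
        lambda person: 25 <= person["age"] <= 45,       # age filter
        lambda person: person["city"] == "Berlin",      # location filter
        lambda person: "running" in person["hobbies"],  # interest filter
    ]

    candidates = audience
    for keep in filters:
        candidates = [p for p in candidates if keep(p)]  # process of elimination

    print([p["id"] for p in candidates])  # -> [1, 3]
    ```

    Swap each lambda for a prompt, and the people for documents or draft copy, and you get the same elimination pattern in a chained-prompt workflow.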

    In a way, the chained filters depicted in the graphic are a form of personalization, but a crude one: I make four different ads and use the platform to show them to four different kinds of people. Pretty limited.

    But over the last 10 years we’ve seen ever more “dynamic” ads – dynamically personalized in real time based on viewer data. The classic example is shopping-cart-data-as-ad: you put shoes in the cart on one site but don’t buy yet, then you see a massive ad for them on another. That’s called retargeting.

    But these are also a crude form of personalization: partly because they rely heavily on ethically questionable data mining, and partly because they work from limited inputs. Or they are just plain wrong.

    You might have seen ads like “Home prices dropping in [your city name]”, where the city named is an hour’s drive away.

    Generative AI-enhanced ads, on the other hand, let advertisers pull in almost infinite inputs to dynamically personalize an ad to an individual.

    They also let the viewer help create, or customize, the ad they want to see in real time, using what we now call “prompting”.
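
    Here’s a rough sketch of what that could look like – the viewer fields, the prompt template, and the generate_text() call are hypothetical placeholders, not any ad platform’s real API – where the advertiser’s signals and the viewer’s own request get composed into one prompt for a text-generation model:

    ```python
    # Hypothetical sketch: assembling viewer signals plus the viewer's own request
    # into a single prompt for a text-generation model.

    viewer = {
        "city": "Leipzig",                           # invented example signals
        "interest": "trail running",
        "recently_viewed": "lightweight rain jacket",
    }
    viewer_request = "show me something for cold, wet mornings"  # the viewer's own prompt

    prompt = (
        "Write one short, friendly ad headline for a rain jacket.\n"
        f"Viewer city: {viewer['city']}\n"
        f"Viewer interest: {viewer['interest']}\n"
        f"Recently viewed: {viewer['recently_viewed']}\n"
        f"Viewer request: {viewer_request}\n"
    )

    # generate_text() stands in for whichever LLM call you actually use.
    # ad_copy = generate_text(prompt)
    print(prompt)
    ```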

    If businesses can achieve that, they can leapfrog over the current regime of personal data mining and offer intriguing, noninvasive, and more personalized ads.

    BTW, wherever I say ‘ad’, you can also swap in the word ‘content’.

  • The best storyteller I ever met ✨

    A CEO should be careful about when to use storytelling – resist its allure.

    “The really important issues of this world are ultimately decided by the story that grabs the most attention and is repeated most often”
    Annette Simmons, Whoever Tells the Best Story Wins

    Let’s file this one under “sad but true”. In the car ride of life, do you prefer a new story, or an often-repeated one?

    Simmons’ thinking echoes David Gergen, who once introduced a communications strategy called “Story of the Day” to his new boss, US President Ronald Reagan. That’s mass communications thinking and it’s dated.

    Anyway, the best storyteller I ever met was … drumroll…

    it depends. (Sorry)

    It depends on:

    • the storyteller’s background
    • the audience
    • the story itself
    • how it’s told

    You might say that the goal isn’t to be the best storyteller but the right storyteller.

    And good timing doesn’t hurt 🤷‍♂️

    There’s also something else: knowing when and whether to tell a story in the first place. BTW, maybe we overuse the word storytelling and its synonym, narrative?

    In “Seduced by Story”, Peter Brooks sardonically points out that the “Starr Report”, the official investigation into the conduct of Bill Clinton preceding his impeachment, contained this headline above its findings:

    “The Narrative.”

    Wait, is it a congressional report related to a consequential legal matter – or is it a story? Also, shouldn’t judge and jury decide the story, not a prosecutor?

    How about sometimes we offer something else – facts, opinions, research, code, flavor. And others can make that into a story if they wish.

    The best storyteller I ever met (ie the best one for me) was – is – very judicious about when to tell a story. I’ll reveal his other storytelling qualities in the next one.

  • Recasting the role of the data scientist II

    I got an interesting response to this post from a friend of the list and actual data scientist (one of several on this list), who also understands how LLMs are designed and built on a nuanced level.

    Because his is a more nuanced view than the ones presented in the paper referenced, I’m sharing parts of it, with his permission. Don your nerd glasses now, as we plunge you deep into the future nexus of generative AI and enterprise data science.

    John responds:

    “I think the enterprise data scientist will be transitioned to product manager, one who … will also become more embedded with marketing, as well as data visualization and first-cut analysis. 

    But I also think the enterprise will still require, especially in large orgs, professional data scientists. I’ve seen this first hand with a multi-billion dollar grocery chain. They found so much value sitting on the floor of their mainframe, that it required a hardcore data scientist to go extract it.

    Even training data-science-focused LLMs – just operating them you’ll need a scientist. But to handle, manage, organize, mine, store properly, scale, and so many other things, the ever growing amount of data enterprises will have, you’ll _require_ a full scientist. “

    What I get out of this is that even if some in a data science role take on responsibilities that look more like product management, the demand for data scientists will still be there.

    Makes sense if you consider what it entails: statistical analysis, pattern recognition, inferential and predictive modeling, data transformation, pipelines, and architectures – plus the oversight of custom-built LLMs that help with all of the above. Also: domain expertise and the ability to communicate insight.

    The generalist finally has the upper hand in the generative AI era – but the data scientist was a generalist to begin with.

  • The new layers

    The bottom layer of any business is the expensive problem(s). But you layer so much on top of that – people, business model, strategy, requirements & plans, tech stack, product design, messaging, pricing, and more.

    The art of layering is choosing the right layers at the right time and skilfully binding them together.

    There’s a layering process in nature too – here’s a tiny sliver of it:

    “By producing sugars and proteins to entice animals to disperse their seed, the angiosperms multiplied the world’s supply of food energy, making possible the rise of large warm-blooded mammals. Without flowers, the reptiles, which had gotten along fine in a leafy, fruitless world, would probably still rule. Without flowers*, we would not be.”
    ― Michael Pollan, The Botany of Desire: A Plant’s-Eye View of the World

    *by “flowers” he means fruiting/flowering plants, grasses, shrubs, trees, etc – angiosperms

    From here you can choose your metaphors for the technology and business world – if “the cloud” is the angiosperms, what comes next? Dropbox.

    If LLMs are the angiosperms (and OpenAI is the current keystone species), here’s an incomplete list of what comes next:

    1. Products that sell the LLM’s default behavior – text generation – like Jasper.ai
    2. Products that sell something else – analysis, summarization, categorization, and intelligent integration – like MyAskAI
    3. New service models, such as the emerging “AI automation agencies” (more than any tech startup, this phenomenon reminds me of the dotcom boom)
    4. Legacy software, like Notion or Brightr, that integrates AI
    5. Ecosystem add-ons, like the aptly named PromptLayer.com
    6. New LLM providers who research, design, build, deploy, and host LLM services

    That’s a lot of new stuff – new layers.

    But that’s not surprising if you do the math on the Pollan quote – there are 1.5 trillion kilograms of humans, cows, pigs, and sheep on planet Earth, all thanks to angiosperms.

  • Recasting the role of the data scientist

    Maybe I have achieved a one-day mind-meld with Azeem Azhar, because he also noticed an interesting and timely research paper: What Should Data Science Education Do with Large Language Models?

    I say timely because OpenAI’s code interpreter plugin for ChatGPT recently became widely available.

    Sidenote 1: they’ll likely follow up with the equivalent functionality (probably superior, per Altman’s mission) via the API. But who knows when.

    Sidenote 2: the paper’s title almost feels like an article you would see in a tech blog, not an aggregator of academic research papers. Is Arxiv.org the new Hacker News?

    But it’s a much more thorough analysis than your average blog post, with coverage of every area in which LLMs are expected to impact data science, from education to professional practice. As to the latter, page 4 contains the passage that probably best sums up the paper:

    “LLMs have the potential to revolutionize the data science pipeline by simplifying complex processes, automating code generation, and redefining the roles of data scientists. With the assistance of LLMs, data scientists can shift their focus towards higher-level tasks, such as designing questions and managing projects, effectively transitioning into roles similar to product managers.”

    I don’t doubt this – it’s one example among many of the opportunity for the laptop worker to subcontract the dirty work to AI, then align their daily work more closely with designing and delivering solutions.


  • Why strategy and messaging are inseparable

    How do you translate a high-level strategy into specific, hard-to-reverse trigger decisions – like hiring an executive, taking your sales and marketing team to a conference, or developing a new product offering?

    By first expressing the strategy as messaging or even sales copy.

    This isn’t just because it’s easier to undo words. It’s also because you better understand a business or product strategy once you write it down and expose it to public scrutiny.

    Messaging keeps business strategy honest; wherever you get one, get the other from the same place.

    To make this concrete, I once had a predictive AI client whose software performed a very specific task: locate the perfect site, in terms of long-term profitability, to build a hospital or clinic. The idea was to do this with Big-Data AI rather than consulting horsepower – going from 3 months to 3 hours.

    I was surprised to learn that although much of the data was purchased, much was manually curated if not created by hand.

    In fact, this business model brought the AI software firm into possession of a valuable data set: businesses that sell products and services to hospitals and clinics – hospital suppliers, a niche industry unto itself.

    Thus a plan for new revenue was hatched: sell a hospital supplier lead list.

    But it wasn’t until we started to write about this offer – by creating ads and a landing page – that the strategy crystallized: rather than try to act like a typical lead prospecting database, act like anything but. This informed the design and delivery of the product itself.

    The high-level strategy for the new product was: embrace your strengths and the competition’s weaknesses. But we didn’t know why until we wrote sales copy.

  • An example of the failure of using AI to automate

    A recent academic research paper came to the same conclusion as a similar paper from June 28th concerning plagiarism at universities in the age of large language models: software cannot reliably detect AI-generated content.

    From the abstract:

    The research covers 12 publicly available tools and two commercial systems (Turnitin and Plagiarism Check) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text.

    That mostly summarizes the answers to the 5 research questions identified on page 3 of the paper (e.g. RQ1: can detection tools for AI-generated text reliably detect human-written text?).

    One important question isn’t on that list:

    Can humans detect AI-generated text reliably?

    Nor is its follow-up:

    Can humans detect AI-generated text reliably when skillfully using AI tools?

    The last question is the most important one – and the only one that has a chance of being a yes.

    BTW, the study looked at 14 detection tools (listed on page 11), such as GPTZero, with $3.5M in funding, and of course Turnitin, acquired for $1.8B in 2019. (Sidenote: the study left out the least brittle plagiarism detection tool).

    That’s a lot of money being spent on some missed points. For one, would we even need these detection tools if we didn’t still assume that quantity of writing was an effective learning approach – or a reliable way to measure how well a student has learned something? And that’s probably the bigger question.

    But here’s where I’m focused: some assume that generative AI tools are here to automate complex solutions soup-to-nuts; they’re not. They are here as aids to skillful, thinking mammals with big neocortexes. You whisper to the tools in the context of comprehensive approaches to complex problems.

  • Make that 22 ways

    “Your mission, should you choose to accept it, is to increase the client’s rate of making trigger decisions. Except when it isn’t”
    Venkatesh Rao, Art of Gig

    This post is sort of a follow-up to last Friday’s post, 21 ways generative AI will transform marketing – I think I overlooked an important one.

    I didn’t realize it until today, when I pulled out the Kindle version of Art of Gig on the metro back from a TOA23 event. As usual, I skipped right to the addendum, 100 rules for Consulting, and my eyes landed on the point above about trigger decisions.

    For context, by ‘trigger decisions’ Rao means important, strategic decisions that trigger a sequence of other smaller decisions and actions.

    I had written in ’21 ways’ that:

    18. Marketing strategy will be produced, altered, and personalized closer-to-instantly than ever before, with the aforementioned Gi/Go still in effect. 

    And then:

    19. For that same reason, and others above, marketing and sales content will also be produced orders of magnitude faster

    This still holds – more accurate and more rapidly produced strategy means more timely and compelling content. But it’s a narrow view.

    The broader view is this: like a skilled strategy consultant, generative AI that helps with marketing strategy will accelerate the rate at which we make trigger decisions – not just trigger decisions in marketing, but in all areas of the business.

  • Little known fact about impostor syndrome

    Here’s the fact: it’s not in the DSM.

    That doesn’t mean it doesn’t exist, though; it just means it’s hard to diagnose as a distinct mental disorder.

    Impostor syndrome exists, instead, as a psychological experience – patterns of thoughts, feelings and behaviors where the through-line is anxiety, even pre-emptive anxiety.

    “If I frame it that way, people will think I’m _______”.

    Yes, this is normal and yes, it means – congratulations – you’re not a sociopath. But let’s be real, it also cuts your revenue, ultimately.

    And impostor syndrome can be a group experience too – people at companies feed one another narratives, or just self-deprecating jokes, that validate doubts. This makes it easy to:

    • take refuge in the “features” of your product, rather than go hard on your unique value proposition
    • emphasize how you’re the same as competitors; you belong, really
    • indulge in aloof, indirect messaging
    • say too much or try too hard to prove your worth

    Actually, this is why firms end up hiring an external strategist – for the outsider, making a strong claim about how the company is different is emotionally uncomplicated.

    Message Maps has the same effect. Given enough information in discovery, it crafts a positioning strategy that leapfrogs right over your company’s impostor syndrome and stakes your flag in the ground. Which is essential for a tool that rapidly creates sales and marketing messaging to help grow revenue.