Entries

  • The business course with naked people

    The course where I learned the most about business was a studio drawing class in the Arts and Architecture department of my university.

    In the first lesson, we set down our pencils, folded our arms for 10 minutes, and looked at – wait for it – a small white styrofoam cube.

    There was a smaller white styrofoam sphere perched on top of it. And on closer inspection a small triangular-shaped object behind it. After a while, I saw that the cube had a faint zig-zag pattern in its surface. I also noticed that the sphere had a seam and that there was a slight notch in it just above where it met the surface of the cube.

    And so on – the details were always there but seeing them was work.

    Later, when we began to draw live models, it was the same effect times 1,000. If the goal is to represent what you see faithfully, trying your best no matter how badly you inevitably fail, then you need more detail to improve your capture. And there’s always more.

    Thus, ultra-high-resolution gigapixel photos, such as Hubble photos of space, offer less to see than the typical room in the typical house.

    It’s the same effect as a discovery interview – you’re trying to paint a picture. The more detail you want, the more you try to capture, the more questions you ask, the more cues you observe.

    It’s easy to apply this to consulting but it applies to any business owner trying to capture a portrait of the buyer in some moment in space and time, such as when they’re auto-filling a payment form with credit card details from a password manager. There’s always more to see.

     

  • The shortcut to powerful messaging

    … is asking each other the right questions.

    I have worked on a list of questions over the years. At times it was informal and half-remembered; at times it lived in a notepad, or in Google Forms (bleh).

    Maybe 100 questions have cycled through. Most are transactional or straightforward, but some rise to the level of “strategy starter questions”. Of this sort, I like 10-15 the most; here’s a sampling:

    • What do you know about your customers that hardly anybody else knows?
    • How do you think things should be done in your area?
    • What do you and your customers both agree on about your industry or market?

    By now these questions and I are on close terms. Sometimes they even come out in casual conversation: “What do you know about your dog that no one else does?”

    But ultimately, questions like these are one-trick ponies and not worth as much as the art of question-asking. 

    By themselves such questions get you maybe one small volley of insight. They’re “starter” questions because they’re only really valuable if they ignite an exchange.

    The exchange affords the chance to ask a better question, one that is personalized specifically to what someone has just said.

    This is like the 5 Whys, which works pretty well, albeit more so for simple problem discovery:

    • What’s the problem?
    • I can’t find a good designer.
    • Why do you need to find a good designer?
    • Because …

    Etc.

    It’s not unlike Gestalt therapy: focus on what surfaces.

    Holding that focus and asking the right follow-up questions is where the value is: if you do it well, you can find excellent words to put on your website.

  • The art of the read-in

    A “read-in” is jargon for the preparation a TV, radio, or podcast show host does before having on a guest. The more they read, the better their questions and conversation, and the more hackneyed questions (ones already asked elsewhere) they avoid. In short, the better the show for the listener.

    If the TV host does a daily read-in – before the show…

    and the elite copywriter reads 7 times more material than he thinks he needs – before writing a word

    and Teddy Roosevelt advance-reads an entire book about or related to each White House guest – before their stay

    and the strategy consultant organizes discovery interview topics into themes, cross-verifies them, SWOTs, gap-analyzes, problem-defines, and visualizes – before making a single recommendation

    and the chef performs an hour of mise-en-place chopping – before turning on a single burner

    and the NBA player watches 3 hours of tape – before a playoff game

    and the hatha yogi warms joints, gently stretches, surya namaskars, pranayama-breathes – before assuming any asana postures.

    If all that is true, what does an automated, self-service product do to prepare for its user – before trying to help them?

  • OpenAI’s platform strategy doesn’t involve ChatGPT plugins

    Yesterday, we looked at Sam Altman’s “no-product-market-fit” comments on ChatGPT plugins.

    Let me de-jargonize that discussion: his idea was that ChatGPT users don’t use plugins because they don’t like them. Whereas my suspicion is that people do like plugins, but if they don’t use them, it’s because they make it harder to use ChatGPT.

    Maybe there’s a design solution to that problem.

    But there’s a bigger takeaway: OpenAI wants to be a platform for software makers – not a software maker itself.

    It’s entirely possible, even likely, that people at OpenAI know very well they could figure out how to make plugins much more useful. But the bigger point is that maybe this doesn’t fit the overall corporate strategy.

    He actually made this point directly: OpenAI will avoid competing with their customers – “other than with ChatGPT”.

    He coupled that assertion with this observation: “a lot of people thought they wanted their apps to be inside ChatGPT but what they really wanted was ChatGPT in their apps”.

    The latter part, at least, is true. And how will OpenAI further that goal?

    He offered a product roadmap for 2023:

    • cheaper and faster GPT-4
    • longer context windows (even up to a million, though that seems doubtful)
    • easier fine-tuning API, per community feedback
    • stateful API (conversation memory)

    With the possible exception of faster GPT-4, these are meaningless to most ChatGPT users. They’re designed to help developers make better generative AI products.
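    The “stateful API” item is the one I can make concrete. Today the chat API is stateless: the client must resend the whole conversation on every request. A minimal sketch of that client-side bookkeeping, where the `model_call` stub is hypothetical and stands in for a real API request:

```python
def model_call(messages: list[dict]) -> str:
    # Placeholder: a real implementation would POST `messages` to the API.
    return f"(reply to: {messages[-1]['content']})"

class Conversation:
    """Client-side conversation memory for a stateless chat API."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        reply = model_call(self.messages)  # the entire history is resent every call
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

    With a stateful API, `Conversation` would shrink to a conversation ID plus one message per request – the server would keep the memory.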

    Of course, there are no guarantees. And if developers decide what they really want is a plugin-economy app store, I’m sure OpenAI would accommodate them.

    But it looks like standalone generative AI products won’t face direct competition from OpenAI anytime soon – they want to be the platform.

  • The real problem with ChatGPT plugins

    … is UX: they interfere with the basic experience of using ChatGPT.

    In an interview with Raza Habib of Humanloop archived here, Sam Altman says (paraphrased): “The usage of plugins, other than browsing, suggests that they don’t have product-market-fit yet”

    But I think there could be PMF. The blockage is UX – ironically for a product which is otherwise super simple to use.

    Here’s the bigger list of issues with plugins:

    • Most of them don’t work as advertised – or just don’t work at all
    • Most of them take too long to return results
    • Many of them require you to do many things outside of the ChatGPT UI, which is annoying
    • Many of them duplicate what ChatGPT can do natively – and do so in a worse way
    • But the big problem: all of them that I have tried disrupt the ChatGPT UX

    To be clear, I think the concept is great and there are several plugins I like. For example:

    • ChatWithGit
    • ChatWithWebsite
    • daigr.am
    • Scraper
    • AskYourPDF
    • Metaphor (<– this one is great, BTW)

    But even these disrupt the iterative flow of conversation.

    It’s like when you’re in an enjoyable, animated conversation and the other person says: “hang on, I have to look that up on my phone”.

    Then two minutes of silence later: “shit I can’t find it”.

    Prompting is partly priming the model, but it’s also priming your own mind – the current plugin UX cuts both short.

  • To hate, despise, loathe, be sick of, and feel sickened by

    Many businesses love data analysis – of the big, little, and even tiny actions customers take. This is great for making cool-looking reports.

    But now we can layer in words analysis, thanks to generative AI. It’s something like sentiment analysis but bigger and more fluid.

    Take for example the words that often come before actions: complaints. I hear you can find them on the Internet.

    We hate, despise, loathe, are sick of, feel sickened by, or are annoyed by. We feel agony and pain, we laugh with disgust or disbelief, and we regret that our time, our day, or even our life was wasted by such a stupid company. All while our trust was abused, our intelligence insulted, and our money taken.

    Like with this Kaggle-dataset customer-support chat: “Just wanted to warn people not to waste your time with Delta’s “Best Fare Guarantee” that they supposedly offer. I went through all the steps, had a perfectly valid claim, on the same day, and was denied. This was their response…”

    In marketing, you scan for genuine, emotionally-charged complaints like the above and echo them back near verbatim:

    “Sick of perfectly valid claims being denied – and your time wasted?”

    “Sick of trying to manage 1,000 processes in spreadsheets?”
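    A toy sketch of that scan-and-echo move. Here simple keyword matching stands in for the sentiment pass an LLM would actually perform, and the phrase list and helper names are invented for illustration:

```python
# Flag emotionally charged messages, then echo one back near verbatim
# as a "Sick of ...?" style headline.
CHARGED_PHRASES = ["sick of", "hate", "waste", "denied", "insulted"]

def is_charged(message: str) -> bool:
    """Crude stand-in for an LLM sentiment pass: keyword lookup."""
    text = message.lower()
    return any(phrase in text for phrase in CHARGED_PHRASES)

def echo_headline(fragment: str) -> str:
    """Echo the complaint back in the customer's own words."""
    return f"Sick of {fragment} – and your time wasted?"

messages = [
    "How do I reset my password?",
    "Don't waste your time with the Best Fare Guarantee – my perfectly valid claim was denied.",
]
charged = [m for m in messages if is_charged(m)]
```

    A real product would swap `is_charged` for a model call, but the echo step – reusing the customer’s own charged words – stays the same.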

    Sidebar: imagine a product that uses your account to log into the Slack or Discord channels of competitors, to proverbially scrape the bitterness of their customers.

    Speaking of products, we love actions analysis – analytics and product metrics like CTRs, CPCs, page scroll depth, usage time, day-of-month churn trends. And I admit, there’s valuable insight there.

    But there’s also value in words analysis – a task that a product can now perform as well as a marketing consultant, with the right training.

  • GI/GO

    When you use LLMs to code, especially if you’re a sh*tty coder like me, you might feel beset by the Garbage-In/Garbage-Out principle. My experience here is mostly with:

    • Copilot/Starcoder
    • ChatGPT with GPT3.5/4
    • OpenAI API tools usually on GPT3.5
    • Claude+/Claude-100k

    BTW, coding is where LLMs most impress. I say that having made 15,000 prompts a month since last fall connected to many parts of life.

    But I feel that my LLM-coding experience is unusual – partly because I’m at such a low level that it’s hard for me to evaluate the code and advice LLMs give me. In contrast, I can detect value from LLMs as they predict, categorize, critique, ideate, graph, write, etc.

    But also, I’m not the “ideal customer profile” – a professional developer. I just want my tool to work, be simple to use, and be secure – I care nothing about things like efficiency or scalability that don’t help users.

    But the LLMs are OBSESSED with them.

    Why? Because the authority-nerd majority dominates the content on which major LLMs develop foundational knowledge. (For example, 11 million C4 tokens are from StackOverflow.) And as professionals, of course these people obsess over efficiency and scalability – and “separation of concerns” (eye-rolling hard right now). And so LLM coding assistants inherit that bias.

    But just because a bias for professional software development best practices is painful for me, and wastes my life, doesn’t mean it’s objectively bad.

    I’m just using the wrong tool for the job… albeit the best available.

    LLM products have 4 places, IMO, where they can take in “garbage”:

    1. Pre-training, as on the C4 cited above
    2. Training/fine-tuning
    3. Hardcoded prompting (from product owner)
    4. Non-hardcoded prompting (from product user)
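    Of those four intake points, only the last two are visible at request time – points 1 and 2 are already baked into the model’s weights. A sketch of how a product combines them, using an OpenAI-style message list (the helper name is invented for illustration):

```python
def build_messages(hardcoded_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble the request a typical LLM product sends on each call."""
    return [
        {"role": "system", "content": hardcoded_prompt},  # 3. from the product owner
        {"role": "user", "content": user_prompt},         # 4. from the product user
    ]
```

    Garbage at point 3 taints every user’s session; garbage at point 4 taints only that one request – which is why product owners sweat the hardcoded prompt most.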

    But here’s the thing – one man’s garbage is another man’s treasure: the challenge will be affordably creating generative AI that works for you and enough others like you.

  • Still unevenly distributed

    According to this Pew study from last week:

    • 18% of U.S. adults have heard a lot about ChatGPT
    • 39% have heard a little
    • 42% have heard nothing at all

    I’m tempted to leave it at that, because for me contemplating that stark reality is a thought exercise.

    But if you extrapolate a bit, you find that just 4% of US adults find ChatGPT useful.

    So what’s going on here – how can a technology that the vast majority of people find non-useful be more revolutionary than anything ever, except maybe the technology that got our knuckles off the ground for good (rock tools)?

    I think the answer is that it can be that revolutionary. Generative AI and machine learning in general may shape our economy and society, maybe even our genetic evolution. For better or worse, who knows.

    But it’s another case of the future being here but unevenly distributed.

    Yes, generative AI rips machine learning out of the graspy talons of the programmer and puts it in the lap of the knowledge worker – but that’s still a pretty small segment of society.

    Maybe the 4% number is also another case of social media feeds drowning out understanding.

    For most people in product, one way to proceed here is to ask: how do I leverage LLMs without relying on the web app called ChatGPT?

  • The other definition of strategy

    The explicit definition of strategy that I shared earlier is brittle. Sure, it applies to most of my use cases, but definitions like this will eventually break if you apply them to enough situations. You shape them to meet your needs.

    But connotative definitions are hard to control.

    The connotation of strategy is basically something like “smart”. It’s not that you can always interchange the two words, but there’s a strong association.

    • Strategy consulting = giving smart advice
    • Strategic planning = making a smart plan
    • Strategic hiring = being smart about hiring
    • Strategic investments = making smart investments
    • Strategic planning software = smart planning software

    This is why it’s dangerous to rely too much on the word – firstly, it’s over-used. Secondly, it basically amounts to you claiming to be smarter, or have smarter software, or a smarter approach. Or whatever.

    That can come off as boastful, even glib, and it fails to compensate for not providing specific reasons why your solution will be worth more than it costs.

    Ironically then, using the word that connotes smart might not be that smart.

    On the other hand, if you can offer specific narratives, and at least hold an explicit, meaningful definition of what strategy means to you, inserting it into your messaging can help you sell.

  • Marketing to investors

    Marketing to your team and marketing to yourself have this in common with marketing to investors: tell stories which reveal that your product creates more value than it costs.

    Salesforce CEO Marc Benioff has always been great at all three.

    For example, early on he convinced himself that software should be delivered 24/7 over the Internet when most thought this was crazy.

    He’s also good at marketing to the team. For example, one story he tells is about an “Ohana” (Hawaiian for family) made up of Salesforce’s hundreds of thousands of employees, partners, developers, customers, etc – everyone has their role in the family and knows its traditions. This conceit has survived multiple layoff rounds over the years; there’s still a certification in it.

    Investors are too disinterested to be part of the Salesforce Ohana but Benioff markets well to them anyway.

    On a recent quarterly earnings call, he announced the release of Einstein GPT and teased the imminent releases of Slack GPT and Tableau GPT.

    He packaged the product roadmapping with concrete profits and a grandiose “prediction”.

    The coming wave of generative AI will be more revolutionary than any technology innovation that’s come before in our lifetime, or maybe any lifetime. Like Netscape Navigator, which opened the door to a greater Internet, a new door has opened with generative AI, and it is reshaping our world in ways that we’ve never imagined.

    Actually, that’s not a prediction; it’s an implication: Salesforce fully grasps, is passionate about, and will capitalize on the newest wave of technology.

    Whether this is true or not, I have no idea, but it’s a good story altogether – and there’s a similar version for the kitchen-table investor.