Author: remap_content_admin

  • Catchiness strategy

    “We could have called it ‘strawberry intelligence.’”
    Gong CEO Amit Bendov

    What Amit meant by that statement, per Andy Raskin, is that rather than “invent” the category of “revenue intelligence”, as Gong did, they might just as well have invented the category of “strawberry intelligence”.

    The idea being that Gong’s enormous success had nothing to do with making up a “____ intelligence” category. The label itself didn’t matter.

    And why not? Maybe because people aren’t logically convinced by an association with a category that a company makes up. I’ll buy that.

    According to Raskin and Bendov, what people cared about was the Gong company/product story – not their cool “revenue intelligence” category.

    Story strategy, you might call it. Like with StoryBrand.

    I don’t know if I’ll buy that one; there are better stories.

    Here’s what I think happened – in the first place, someone in marketing at IBM came up with “business intelligence” back in the 1980s. And tech companies have been running with it ever since, including Gong. End of story.

    Why? Because business intelligence, or revenue intelligence, or whatever, is an easy-to-grasp idea that has a nice ring to it – and can be easily riffed on in your sales and marketing materials. It doesn’t have to be logical, it just has to be non-illogical – and catchy.

  • Keep it short

    For 20 years, conventional wisdom on the web has been: more words, longer posts.
     
    In the late 2010s, Brian Dean explored this through large-scale SEO research. In analyzing blog content and search engine results, Brian formalized what SEO experts had long guessed: Google rewards content for length – 1,500 to 2,000 words, to be specific.
     
    For now, this is still the case. The net effect is agonizing wordiness. Just like in school: if you force people to write X number of words, you bore and annoy readers. ChatGPT has mastered the art; people hate it. It’s industrial-age behavior.
     
    To clarify the obvious: just because an article is long doesn’t mean it’s bad. What’s more, long articles can add value in a way short ones can’t.
     
    The average essay in The New Yorker is 3,000 words; many are 6,000.
     
    And long blog post essays about a business, tech, or societal trend can be more, not less, useful for their length, when there’s space to weave together multiple ideas, and provide examples or discuss data.
     
    For example, this article by Andreessen, Why AI Will Save the World: https://a16z.com/2023/06/06/ai-will-save-the-world/.
     
    It’s long enough that it even has a chapter structure that serves as a table of contents and internal navigation.
     
    The ‘chapter’ approach was actually another Brian Dean SEO finding.
     
    But neither Google nor anyone else requires or incentivizes Marc Andreessen, a billionaire venture capitalist, to write such a long post. He did it to make the central point stronger and more memorable.
     
    And he can also keep it short – just check his Twitter. He does so 999 times out of 1,000.
     
    Here’s the thing: keep it short almost all the time, unless you have an extraordinary reason not to.
  • Jane Goodall on generative AI

    “Ever since I was a child, I’ve dreamed of understanding what animals are saying. How wonderful that is now a real possibility.”
    ― Dr. Jane Goodall

    Quick question – but first some context; bear with me.

    Animal research org Earth Species Project is using neural network AI, of the same sort that I use here to create personalized, conversational discovery interviews.

    What is ‘ESP’ up to? They’re learning how to document, decode, and ultimately understand animal language, or communication, let’s call it.

    This means we can speak with them.

    Maybe this means I can use Message Maps to conduct a discovery interview with a Baleen whale. Maybe it means we can provide animals with their own AI tools, so they can leverage their intelligence and finally overcome the problem of not having opposable thumbs.

    That’s interesting sci-fi but back to the question: if we can level the species playing field, or at least talk to more of them, can we also use generative AI to level the playing field for other people?

    We wanted that same outcome out of social media and mobile apps, but the results have been mixed at best. Maybe we can do better this time?

    *    *    *

    PS. Yesterday I announced a “pre-release” of Message Maps, in which you can use one part of the tool in exchange for feedback – here are the official release notes: https://github.com/roprice/messagemaps-community/releases/tag/v0.1.0-pre-alpha-Ausangate

    PPS. Speaking of AI and animals, there are more camelids on the mountain of Ausangate than there are open-source LLMs named after them.

     

  • Pre-alpha release of message maps

    Lately on this list and elsewhere I’ve been speaking of:

    • the value of personalized follow up questions
    • garbage-in/garbage-out in generative AI
    • trying to “see” the buyer more deeply
    • chatgpt in our apps, not the other way around

    And generative AI as the user interface for … pretty much everything.

    It’s all connected to message maps, much of it directly.

    Which I’m pre-releasing part of today!

    I’m not releasing the full tool today but I am sharing its AI-assisted discovery interview – the part that mines your mind for overlooked diamonds in the rough. When burnished and set on the right showcase, such diamonds capture attention.

    If you’d like to try to uncover one, reply and let me know – I’ll send an invite code to let you try it out FREE and set you loose to ‘mine your mind’ (:

  • The business course with naked people

    The course where I learned the most about business was a studio drawing class in the Arts and Architecture department of my university.

    In the first lesson, we set down our pencils, folded our arms for 10 minutes, and looked at – wait for it – a small white styrofoam cube.

    There was a smaller white styrofoam sphere perched on top of it. And on closer inspection a small triangular-shaped object behind it. After a while, I saw that the cube had a faint zig-zag pattern in its surface. I also noticed that the sphere had a seam and that there was a slight notch in it just above where it met the surface of the cube.

    And so on – the details were always there but seeing them was work.

    Later, when we began to draw live models, it was the same effect times 1,000: if the goal is to represent what you see faithfully – trying your best no matter how badly you inevitably fail – then you need more. More detail to improve your capture. And there’s always more.

    Thus, ultra-high-resolution gigapixel photos, such as Hubble photos of space, offer less to see than the typical room in the typical house.

    It’s the same effect as a discovery interview – you’re trying to paint a picture. The more detail you want, the more you try to capture, the more questions you ask, the more cues you observe.

    It’s easy to apply this to consulting but it applies to any business owner trying to capture a portrait of the buyer in some moment in space and time, such as when they’re auto-filling a payment form with credit card details from a password manager. There’s always more to see.

     

  • The shortcut to powerful messaging

    .. is asking each other the right questions.

    I have worked on a list of questions over the years. At times they were informal and half-remembered; at times they lived in a notepad or Google Forms (bleh).

    Maybe 100 questions have cycled through. Most are transactional or straightforward but some rise to the level of “strategy starter questions”. Of this sort, I like 10-15 the most; here’s a sampling:

    • What do you know about your customers that hardly anybody else knows?
    • How do you think things should be done in your area?
    • What do you and your customers both agree on about your industry or market?

    By now these questions and I are on close terms. Sometimes they even come out in casual conversation… “What do you know about your dog that no one else does?”

    But ultimately, questions like these are one-trick ponies and not worth as much as the art of question-asking. 

    By themselves such questions get you maybe one small volley of insight. They’re “starter” questions because they’re only really valuable if they ignite an exchange.

    The exchange affords the chance to ask a better question, one that is personalized specifically to what someone has just said.

    This is like the 5 Whys, which works pretty well, though more so for simple problem discovery:

    • What’s the problem?
    • I can’t find a good designer.
    • Why do you need to find a good designer?
    • Because …

    Etc.
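    As a toy illustration, the 5 Whys loop above can be sketched in code, with canned answers standing in for what a real person would say in a live exchange – nothing here is a real interviewing tool:

```python
def five_whys(problem, answer_for):
    """Ask 'Why?' up to five times, collecting the chain of answers."""
    chain = [problem]
    current = problem
    for _ in range(5):
        answer = answer_for(current)
        if answer is None:  # no deeper cause surfaced; stop early
            break
        chain.append(answer)
        current = answer
    return chain

# Hypothetical canned answers, standing in for a live conversation.
canned = {
    "I can't find a good designer.": "My job posts get no qualified applicants.",
    "My job posts get no qualified applicants.": "The posts don't explain the work clearly.",
}

chain = five_whys("I can't find a good designer.", canned.get)
# chain now holds the problem plus each successive "because ..."
```

    The point is only structural: each new question is generated from the previous answer, which is exactly what makes the follow-ups personalized.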

    It’s not unlike Gestalt Therapy; focus on what surfaces.

    Holding that focus and asking the right follow-up questions is where the value is: if you do it well, you can find excellent words to put on your website.

  • Art of read-in

    A “read-in” is jargon for the preparation a TV, radio, or podcast show host makes before having on a guest. The more they read, the better their questions and conversation, and the fewer hackneyed questions – ones already asked elsewhere – they fall back on. In short, the better the show for the listener.

    If the TV host does a read-in a day – before the show…

    and the elite copywriter reads 7 times more material than he thinks he needs – before writing a word

    and Teddy Roosevelt advance-reads an entire book about or related to each White House guest – before their stay

    and the strategy consultant organizes discovery interview topics into themes, cross-verifies them, SWOTs, gap-analyzes, problem-defines, and visualizes – before making a single recommendation

    and the chef performs an hour of mise-en-place chopping – before turning on a single burner

    and the NBA player watches 3 hours of tape – before a playoff game

    and the hatha yogi warms joints, gently stretches, surya namaskars, pranayama-breathes – before assuming any asana postures.

    If all that is true, what does an automated, self-service product do to prepare for its user – before trying to help them?

  • The real problem with ChatGPT plugins

    … is UX: they interfere with the basic experience of using ChatGPT.

    In an interview with Raza Habib of Humanloop archived here, Sam Altman says (paraphrased): “The usage of plugins, other than browsing, suggests that they don’t have product-market fit yet.”

    But I think there could be PMF. The blockage is UX – ironically for a product which is otherwise super simple to use.

    Here’s the bigger list of issues with plugins:

    • Most of them don’t work as advertised – or just don’t work at all
    • Most of them take too long to return results
    • Many of them require you to do many things outside of the ChatGPT UI, which is annoying
    • Many of them duplicate what ChatGPT can do natively – and do so in a worse way
    • But the big problem: all of them, that I have tried, disrupt the ChatGPT UX

    To be clear, I think the concept is great and there are several plugins I like. For example:

    • ChatWithGit
    • ChatWithWebsite
    • daigr.am
    • Scraper
    • AskYourPDF
    • Metaphor (<– this one is great, BTW)

    But even these disrupt the iterative flow of conversation.

    It’s like when you’re in an enjoyable, animated conversation and the other person says: “hang on, I have to look that up on my phone”.

    Then two minutes of silence later: “shit I can’t find it”.

    Prompting is part priming the model but it’s also priming your own mind – the current plugin UX cuts both short.

  • To hate, despise, loathe, be sick of, and feel sickened by

    Many businesses love data analysis – of the big, little, and even tiny actions customers take. This is great for making cool-looking reports.

    But now we can layer in words analysis, thanks to generative AI. It’s something like sentiment analysis but bigger and more fluid.

    Take for example the words that often come before actions: complaints. I hear you can find them on the Internet.

    We hate, despise, loathe, are sick of, feel sickened by, or annoyed by. We feel agony and pain, we laugh with disgust or disbelief, and we regret that our time, our day, or even our life was wasted by such a stupid company. While our trust was abused, our intelligence insulted, and our money taken.

    Like with this Kaggle-dataset customer-support chat: “Just wanted to warn people not to waste your time with Delta’s “Best Fare Guarantee” that they supposedly offer. I went through all the steps, had a perfectly valid claim, on the same day, and was denied. This was their response…”

    In marketing, you scan for genuine, emotionally-charged complaints like the above and echo them back near verbatim:

    “Sick of perfectly valid claims being denied – and your time wasted?”

    “Sick of trying to manage 1,000 processes in spreadsheets?”
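    A minimal sketch of this scan-and-echo move, assuming a hand-picked list of trigger phrases – a real “words analysis” pass would be broader, or use a language model instead of keyword matching:

```python
import re

# Illustrative trigger phrases; these are assumptions, not a product's logic.
TRIGGERS = ["waste your time", "denied", "sick of", "insulted"]

def find_complaints(text):
    """Return sentences containing at least one trigger phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if any(t in s.lower() for t in TRIGGERS)]

def echo_headline(complaint):
    """Echo a complaint back, near verbatim, as a question."""
    return f'Sick of this? "{complaint.strip()}"'

# Toy review text, loosely modeled on the Delta complaint quoted above.
review = ("I went through all the steps, had a perfectly valid claim, "
          "and was denied. Just don't waste your time with this.")
hits = find_complaints(review)
headline = echo_headline(hits[0])
```

    The interesting part is the echo: the headline reuses the customer’s own emotionally charged words rather than marketing copy.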

    Sidebar: a product that uses your account to log into competitors’ Slack or Discord channels, to proverbially scrape the bitterness of their customers.

    Speaking of products, we love actions analysis – analytics and product metrics like CTA clicks, CPCs, page scroll depth, usage time, and day-of-month churn trends. And I admit, there’s valuable insight there.

    But there’s also value in words analysis – a task that a product can now perform as well as a marketing consultant, with the right training.

  • GI/GO

    When you use LLMs to code, especially if you’re a sh*tty coder like myself, you might feel beset by the Garbage-In/Garbage-Out principle. My experience here is mostly in:

    • Copilot/Starcoder
    • ChatGPT with GPT3.5/4
    • OpenAI API tools usually on GPT3.5
    • Claude+/Claude-100k

    BTW, coding is where LLMs most impress. I say that having made 15,000 prompts a month since last fall, connected to many parts of life.

    But I feel that my LLM-coding experience is unusual – partly because I’m so low-level; it’s hard to evaluate the code and advice LLMs give me. In contrast, I can detect value from LLMs as they predict, categorize, critique, ideate, graph, write, etc.

    But also, I’m not the “ideal customer profile” – a professional developer. I just want my tool to work, be simple to use, and be secure – I care nothing about things like efficiency or scalability that don’t help users.

    But the LLMs are OBSESSED with them.

    Why? Because the authority-nerd majority dominates the content on which major LLMs develop foundational knowledge. (For example, 11 million C4 tokens are from StackOverflow.) And as professionals, of course these people obsess over efficiency and scalability – and “separation of concerns” (eye-rolling hard right now). And so LLM coding assistants inherit that bias.

    But just because a bias for professional software development best practices is painful for me, and wastes my life, doesn’t mean it’s objectively bad.

    I’m just using the wrong tool for the job… albeit the best available.

    LLM products have 4 places, IMO, where they can take in “garbage”:

    1. Pre-training, as on the C4 cited above
    2. Training/fine-tuning
    3. Hardcoded prompting (from product owner)
    4. Non-hardcoded prompting (from product user)
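    To make points 3 and 4 concrete, here’s a minimal sketch of how a product typically composes the two prompting layers – the names and prompt text are illustrative assumptions, not any real product’s API:

```python
# Point 3: the prompt hardcoded by the product owner.
HARDCODED_SYSTEM_PROMPT = (
    "You are a coding assistant for non-professional developers. "
    "Prefer simple, working code over scalability concerns."
)

def build_messages(user_prompt):
    """Combine the owner's hardcoded prompt (3) with the user's (4)."""
    return [
        {"role": "system", "content": HARDCODED_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},  # point 4
    ]

messages = build_messages("Write a script that renames my photos.")
```

    Both layers feed the same request, so “garbage” – or a bias the user never asked for – can enter at either one.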

    But here’s the thing – one man’s garbage is another man’s treasure: the challenge will be affordably creating generative AI that works for you and enough others like you.