
  • Talking game

    “People don’t think how they feel. They don’t say what they think, and they don’t do what they say.”
    – David Ogilvy

    I don’t know about other people, but that pretty much sums me up.

    Which leaves the question – how do you tell what kind of technology solution people want from you?

    I don’t know of any formulaic answer, and I don’t think anyone does, ever has, or ever will.

    But I know that talking to people about it doesn’t hurt.

    Academics have a term for this – for talking to people with the goal of determining what they want: qualitative research. That’s as opposed to “quantitative research,” which refers to research such as large-scale surveys.

    I’m sure both have their place, but for sophisticated products based on expertise, I’d bet on qualitative. Cue the next Ogilvy quote:

    “I notice an increasing reluctance on the part of marketing executives to use judgment; they are coming to rely too much on research, and they use it as a drunkard uses a lamp post for support, rather than for illumination.”

    That’s a banger of a quote, but there’s more to it than judgment. In fact, the ability to converse well, which, of course, means talking well and listening well, is like Jack’s magic bean – lots of potential upside.

    Active listening has been memed to death, maybe rightfully so, but is talking an overlooked skill? How do you articulate what your business does to literally any person? How do you frame it so it makes sense and is interesting, especially if they’re an ideal customer?

    Here’s the thing – if you learn to talk well about your product, you have the foundation for great brand messaging.

  • Code plus definition

    A SaaS product is made of code and definitions (and lots of other things, but let’s leave it at that for now).

    Let me back up and share a quick story.

    One of the first blog posts I ever wrote, back in 2007, was not only badly written but quite stupid as well; a copy of it will exist on archive.org probably until my dying day. This is great – it will keep me humble (:

    To summarize that error of a post, I loudly declared that “Highrise is not a CRM”.

    Jason Fried responded in the comments, “I guess it depends on your definition of CRM”.

    For context, Highrise was 37signals’ new CRM product, hot on the heels of Basecamp. They had just released it. To my mind, it didn’t meet the standards of Salesforce and the other enterprise CRMs I was used to. Not enough bells and whistles.

    But I assume this is the definition of CRM Jason was working with: “a product that can manage and organize interactions and relationships with customers and potential customers”.

    Could be a notebook, an Excel spreadsheet, a Salesforce instance – or Highrise.

    Here’s the thing – 37signals didn’t just have a SaaS product, they had another piece of IP to go with it: their own definition of CRM.

    What’s the most important term for your business to define?


  • Explaining vs engaging

    When you first meet someone, it’s important to nail the first impression! I mean, you have to explain your entire personality, good character traits and bad character traits, your entire range of sensibilities, knowledge, experience, socio-economic background, and emotional tendencies—your entire story. You’ve got to nail this on the first try—ideally in so few words they’d make a great headline.

    Only then will there be relationship success!!

    ….

    Ok, sorry for the extended exercise in reductio ad absurdum. I don’t like to indulge in sarcasm too much, but I wanted to drive the point home.

    Here’s the point in simple, sincere words—unless you’re selling a pencil, it’s never possible to say it all. But also—you don’t have to, and you don’t want to. Because you will look weird if you try.

    Conventional business wisdom, on the other hand, holds that you have to convey everything all at once to get your point across. And the implication is that there’s some magical art to doing this that only marketing and branding strategists (or anyone with a hard-hitting job title) are capable of.

    This belief lets groups nitpick messaging—”but it doesn’t say anything about x” or “it doesn’t emphasize our x.” As if it needs to.

    The astute premise of The Anatomy of Humbug is that every theory popular in advertising, marketing, and branding – every single one – is unscientific. In other words, when you strip away the humbug, the rationale for each theory (USP, Four Ps, Ansoff, SWOT, etc., etc.) boils down to “Oh, it’s common sense”.

    And you know what – that’s often true: some great ideas are based in common sense. But the idea that you have to explain it all on the first try is not one of them. The goal in introducing yourself, your product, or anything else is not to explain – it’s to engage.

  • Ideas in a state of suspended animation

    “Another point I might elaborate on a little is about words. We tend to forget that words are, themselves, ideas. They might be called ideas in a state of suspended animation. When the words are mastered the ideas tend to come alive again.

    Thus, words being symbols of ideas, we can collect ideas by collecting words. The fellow who said he tried reading the dictionary but couldn’t get the hang of the story, simply missed the point that it is a collection of short stories.”
    – James Webb Young, A Technique for Producing Ideas

    If you read this with an eye on your own business, you might come away with this:

    1. the ideas most important to your business are captured in words
    2. but those words aren’t useful (“alive”) until you master them
    3. and as you master those words, stories will emerge

    For a typical B2B tech firm, there are 5 to 10 such ‘most important’ ideas/words/terms. These tend to translate directly into core messages.

    Concrete example: MuleSoft.

    One of MuleSoft’s core ideas has always been connection: enterprise connection and integration between offline and online systems is easier than people assume.

    Another core idea is transformation: There’s a systematic, cloud-based solution to connection and integration that can transform an organization.

    Those are the core ideas spelled out in detail – mastering them allows for strong brand messaging:

    1. The word “Anypoint”, a brand name, conveys the MuleSoft concept of easier-than-expected connection
    2. The words “Connect Anything, Change Everything”, a slogan, convey the concept of transformation-through-connection

    Those words belong to MuleSoft, but to get them, they also had to master words that belong to everyone – like Connection and Transformation, but also Data Synchronization, API Integration, and more.

    What words does your business need to master – or has it mastered already?

  • What’s it about?

    I wanted to give you a brief idea of what this list is about. The signup page currently says:

    Art of Message
    A daily email list about strategic brand messaging for tech firms
    Get two-minute emails with insights that can help refine your message and improve the way you build, market, and sell your products

    That’s 100% accurate but I want to provide more detail and context. Firstly, I’m writing to this list as I manage other aspects of the company it belongs to, which means I am busy. I bet you are too! For that reason, I will keep these emails very short – under 300 words.

    Secondly, I use the terms messaging, strategic messaging, and brand messaging almost interchangeably. Related, “child” terms with more divergent meanings include: product messaging, solution messaging, sales messaging, and marketing messaging. But “Brad messaging” otoh is a bit different; if you see me use it, that’s probably a typo!

    That’s the context; here’s some detail on the ideas I write about:

    • that anyone can create brand messaging through specific practices and concepts
    • that brand messaging is the prerequisite for all kinds of sales and marketing activities
    • that good messaging attracts better people/community, and gets people on the same page
    • that product messaging can and should shape the design of your product or solution
    • that brand messaging can transform customer support, success, and consulting services
    • that you can and should constantly refine your product messaging ideas by learning from the news and markets – I may include a fair amount of tech news analysis

    The main idea is that high-quality messaging can be a crucial part of a tech business and affects sales, marketing, product/solutions, and people.

    Will I deviate from these themes? Probably. But for now, there’s the answer to ‘What’s this list about?’

  • Intellectual Property (IP)

    IP is knowledge, information, ideas, or creative work with the potential to provide a competitive advantage or economic value.

  • Two business books a day vs GPT

    This will sound unlikely, but I think there are few people who use GPT in as many weird ways and in as much volume as I do. According to OpenAI’s analytics on my account, I’ve queried GPT an average of 19,000 times a month (~650 queries/day) since November 2022 – and even before then (before ChatGPT), I was querying the Playground several thousand times a month.

    And that’s excluding the thousands of inadvertent GPT queries made on my behalf through Copilot.

    Now to be fair, the vast majority of those queries (on ChatGPT, the Playground, via Copilot, and via the API) were related to coding, where I use the tools as a sort of real-time debugger, code advisor, ideation tool, documentation source, and general junior programmer, all through a process of near-constant copy and paste (Copilot is radically inadequate relative to my customized prompting).

    This immediately raises the quality vs. quantity question. For my purposes, both work. Great questions are like levers in their ability to unlock insight, but do they achieve more total growth than tens of thousands (as in my case) of normal questions? I think there’s a way to achieve high-quality outcomes using this approach of flooding GPT with iterative questions, favors, and commands. After all, children ask an equally absurd number of questions (about 400/day at age 4), to their considerable benefit.
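
    To make the rat-a-tat approach a bit more concrete, here’s a minimal sketch of what “flooding GPT with iterative questions” can look like in code, assuming the 2023-era openai Python package (newer client versions use a different interface). The questions and the API key placeholder are hypothetical stand-ins, not a record of my actual workflow.

```python
# A minimal sketch of rapid-fire querying, assuming the 2023-era `openai`
# Python package (openai.ChatCompletion; later client versions differ).
# The questions below are hypothetical stand-ins.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

questions = [
    "What does this stack trace usually indicate?",
    "Suggest three clearer names for this function.",
    "Summarize what functools.lru_cache does and when to use it.",
]

for q in questions:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": q}],
    )
    # Each question is sent on its own - quantity and iteration over ceremony.
    print(response["choices"][0]["message"]["content"])
```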

    But a 4-year-old’s questions are never answered at such length.

    One consequence of my rat-a-tat querying style has been the vast volume of GPT-generated content I have soaked in. I’m probably “fed” about 130,000 words a day (650 queries per day = 650 responses × 200 words/response on average – my guesstimate).

    That’s easily two business books worth of words. A day.

    *     *     *

    Would I be better off reading two business books a day than cramming 130,000 words into my eyes? 

    Here’s the yes argument:

    “Yes, because

    • even the schlockiest and most poorly written business books have the advantage of depth – they stay on topic for an extended period of time.
    • Time-extended human effort was exerted on them. New ideas come out of that process, which I highly doubt AI will ever be capable of.
    • Of course, you might hear an idea from GPT that’s new to you, but it didn’t create it.
    • Furthermore, though this applies only to the few well-written business books (e.g., Ogilvy on Advertising), the writing style itself holds an intense aesthetic appeal of a kind that may never be attainable by a language model.”

    Here’s the no argument:

    “No, because

    • I cannot direct the words that come out of a business book. As I read it, I cannot ask the book a question about the book. This is something I can now do with GPT and a sort of GPT-enhancement tool called LangChain (sketched below).
    • Not only can’t I affect what comes next, I can’t affect the manner in which it is said; the book is only ever written in the style of its author, and this may become tiresome. Or it may be inappropriate to my mood or question.
    • Also, when I read a book written by another human, I don’t have the sense of participation that I get from prompting GPT.”
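
    To make that LangChain point concrete, here’s a minimal sketch of “asking the book a question” – retrieval-based QA over a plain-text copy of a book. It assumes the 2023-era LangChain module layout (paths have shifted in later releases), an OpenAI API key in the environment, and a hypothetical file named business_book.txt; treat it as an illustration, not a prescription.

```python
# A rough sketch of "asking a book a question", assuming the 2023-era
# LangChain API (module paths moved in later releases) and a hypothetical
# plain-text copy of the book in business_book.txt.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Load the book and split it into chunks small enough to embed.
docs = TextLoader("business_book.txt").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Index the chunks so passages relevant to a question can be retrieved.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Ask the book a question; the chain retrieves passages and answers from them.
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=index.as_retriever())
print(qa.run("What does the author actually mean by positioning?"))
```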

    Of course I asked GPT what it thought about these two arguments. I won’t reprint that here – if you’d like to find your own answer, I recommend trying this prompting approach yourself.

    *     *     *

    Yesterday I listened to an Akimbo podcast – the episode’s entire transcript written by AI and “read” by an AI voice that sounds exactly like Seth Godin.

    Well, that’s not entirely true – in fact, that’s outrageously false.

    Seth wrote the prompts then read GPT’s answer directly.

    The prompts gave no style, substance, or tone guidance, so they yielded typical GPT material: expansive, slightly perky, tirelessly-grammatically-correct, well-informed, and formulaically neutral to the point of sounding indecisive.

    But the prompts created the answers; thus the prompts created the episode. Thus, Seth Godin created the episode, just with a different tool than before.

    And the answers weren’t bad. I won’t track them down, because I got them myself from ChatGPT. (Sidebar: people should do this more).

    My question was almost identical to the one Seth asked:

    What are 7 ways in which large language models could be detrimental to human society?

    and the answers varied only slightly:

    1. Reinforcing existing biases: Large language models are trained on vast amounts of data, much of which reflects the biases and prejudices of human society. As a result, these models may perpetuate and reinforce harmful biases in areas such as race, gender, and socio-economic status.
    2. Spreading misinformation: Large language models are capable of generating text that is highly convincing and sophisticated, which could be used to spread misinformation and disinformation on a massive scale.
    3. Undermining privacy: Large language models require access to vast amounts of data in order to function effectively. This data may include sensitive information about individuals and could be used for surveillance or other invasive purposes.
    4. Displacing human jobs: Large language models have the potential to automate many tasks that are currently performed by humans, which could lead to significant job displacement and economic disruption.
    5. Exacerbating inequality: The benefits of large language models may not be evenly distributed, and some individuals and communities may be left behind or even harmed by their widespread use.
    6. Contributing to environmental degradation: Large language models require significant amounts of energy to train and operate, which could contribute to the carbon footprint and environmental impact of the technology sector.
    7. Decreasing human creativity: As large language models become more advanced, there is a risk that they may displace human creativity and originality in areas such as writing, art, and music, leading to a loss of human culture and expression.

    Excellent and thought-provoking answers, worthy of inclusion in an Akimbo podcast. 

    And there’s the bigger point: it’s not the answers that matter, it’s the question. I don’t mean just with language models, but with every other form of intelligence, namely yourself, other humans, and for some, God or Gods, which you may or may not think of as comprised by yourself or the collective unconscious. Whomever you ask, your ability to compose questions is, if not everything, pretty dang important.

    With possible present-bias, the AI crowd has appropriated this ancient skill and rebranded it “prompt engineering”. The Akimbo episode above is a prompt engineering showcase.

    PS. Want to try it yourself? Copy this essay into ChatGPT, then below it paste any of the 31 prompts I created on this page: https://www.rowanprice.com/prompt/31-summary-prompts/


  • Building a valuable software product

    Building a valuable software product is like building anything else (such as productized services, which I wrote about a few times back in 2019): build it based on your expertise and a sense of empathy for whomever you want to help.

    Except you should assume that if you do it the wrong way, it could waste a small fraction of someone’s life. And wasting even the tiniest fraction of someone else’s life may not be acceptable to you.

    Ok, but how do you do that? Here are some bigger-picture principles or attitudes that it might benefit you to embrace:

    • Soft and tender when business-as-usual is hard and harsh
    • Fluid and flexible when business-as-usual is slow-moving and rigid
    • Melancholic when business-as-usual is forced optimism
    • Poetic when business-as-usual is explicit and numbers-oriented
    • Deep when business-as-usual is efficient
    • Ecological when business-as-usual is purely economical
    • Asking beautiful questions instead of having all the answers

    Of course, that’s too abstract to be useful. What’s useful is best captured by YC partner (and co-founder of Justin.TV/Twitch) Michael Seibel.

    TLDR:

    1. (Re)-launch quickly
    2. Get initial customers
    3. Talk to customers and use their product feedback

    (also: rinse and repeat)

    That’s pretty much it. Simple and straightforward but hard, and maybe a little complicated, in an emotional way at least. Because you have to talk to people. That’s sort of the throughline: can you talk to people, and talk to them early, before you have something that you know they’ll like?

    If there’s a throughline #2 here, it’s the infinite game – but an agile infinite game. Play not to win but to iterate quickly through the 3 steps, to make sure you’re in constant conversation with people. There are few kinds of software entrepreneurship in which conversation is not the central theme of your business.

    Yet you also want to talk about something – some product, not just the idea of a product. For me at least – your mileage may vary, but for me – to keep the principle of never wasting a moment of someone’s life, you should propose to discuss something that actually does something. Anything.

    This helps you avoid hiding in the act of creating a product. Fluid and flexible works here. So does soft and tender; soft and tender makes you receptive to what really hurts people – and to what hurts you – as does asking beautiful questions.

    Being deep when business as usual is superficial helps you get initial customers; it lets you package your introduction with some kind of substantial observation, invitation, or offer.

    Not sure where to start? One option is to ‘return to start’: delete your website, replace it with one headline and one blurb, and make a list of people to talk to.

    Already have too much website to ‘return to start’? Make your homepage your about page, and simplify the home page per the suggestion above.

    When the time has come for more detail, you’ll know.


  • Brand book

    A brand book guides and defines a company’s brand identity and positioning. It typically includes information such as the company’s mission and vision, brand values, tone of voice, and visual identity guidelines. The brand book is a strategic tool; like a message map, it helps ensure the consistency and coherence of the company’s visual and verbal identity across all touchpoints. Unlike a message map, it doesn’t necessarily specify audience-personalized unique value propositions, product positioning, and other important talking points.

  • Examples of Intelligence

    2023 appears to be a breakout year for intelligence of the artificial kind, so what indicates true intelligence has been on my mind.

    Speaking of both subjects, I was a little disappointed with the GPTs (ChatGPT and GPT-3.5 collectively).

    I asked them:

    1. What are the top three signs of intelligence?
    2. What are three useful criteria for assessing someone’s intelligence?
    3. What are real-world examples?
     
    This yielded disappointing results not worth reprinting.
     
    But it also induced me to write a more intelligent prompt. I will print it with the prior three prompts, because as far as the text generation algorithm is concerned, a prompt consists of itself plus all prior prompts within the same communication window (there’s a sketch of that accumulation after the list below).
     
    So this was the de facto prompt:
    1. What are the top three signs of intelligence?
    2. What are three useful criteria for assessing someone’s intelligence?
    3. What are real-world examples?
    4. What are real world examples that you notice in the context of conversation with someone you don’t otherwise know anything about? Like someone you have met at a party and struck up a conversation with. What are objective “tells” (in the sense of poker tells) that indicate this person’s intelligence?
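
    Here’s a minimal sketch of what that “de facto prompt” means mechanically, assuming the 2023-era openai Python package: each new question is appended to the same message history, so the model answers question 4 in light of questions 1–3 and its own earlier answers. The code is illustrative, not a record of my actual session.

```python
# Sketch of prompt accumulation, assuming the 2023-era `openai` package:
# every question is answered in light of all prior turns in the window.
import openai

questions = [
    "What are the top three signs of intelligence?",
    "What are three useful criteria for assessing someone's intelligence?",
    "What are real-world examples?",
    "What are real-world examples that you notice in conversation with "
    "someone you just met at a party? What are objective 'tells' (in the "
    "sense of poker tells) that indicate this person's intelligence?",
]

messages = []  # the running conversation window
for q in questions:
    messages.append({"role": "user", "content": q})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply["choices"][0]["message"]["content"]
    # Keep the assistant's answer in the window so later prompts build on it.
    messages.append({"role": "assistant", "content": answer})
```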

    With this prompt, I finally got some decent answers.

    GPT-3:

    ChatGPT:

    These are pretty good indicators of intelligence (and also of conversational ability).

    But they don’t provide specific examples. Which is what I asked for.

    As in the ability to summon and articulate an abundance of relevant, interesting, and thought-provoking examples.

    The reason that the GPTs fail to provide examples is not that they can’t think, though they can’t. The reason is either that good examples “in the wild”, i.e. the texts GPT trained on, are rare – or that it is hard for the GPTs to distinguish the good ones from the millions of so-so examples. Or both.

    *    *    *

    There’s one sentence from Venkatesh Rao’s Art of Gig that won’t unstick itself from my brain: “Your advice is only as good as your examples“. 

    In a way, the whole book is about providing intelligence as a service, in that it is a book about strategy consulting and the basic job of a strategy consultant, the common denominator, is to be intelligent in service of client goals. (This is also in the book).

    The brainstuck sentence mentioned above comes from a section of Art of Gig entitled, “100 Consulting Tips”. If the assertion above is true, then we can think of this section of the book as “100 Tips for being intelligent”.

    Here’s the 36th tip:

    “Examples, examples, examples. Your advice is only as good as your examples. Collect examples everywhere, from all sources. Half your value lies in being an encyclopedia of examples, with ready access to greater volume, velocity, and variety than employees”.

    I will also reproduce his 37th tip because it’s related.

    “Read up on classic and cliché examples commonly cited in your consulting niche and have something fresh to say about them. Examples: Southwest Airlines for strategy consultants, iPhone for design consultants, AlphaGo for AI, BP futures for futurists.”

    So the accomplished strategy consultant and author says examples are at the heart of intelligence. 

    To be fair, education, work experience, and personal habits or practices also affect one’s ability to provide examples. For example, if you have a practice of taking notes and even writing about them, you’ll be more likely to offer them as examples when the time comes.

    *    *    *

    As automated content-generation AI begins to seep into every part of our lives, people are going to think more carefully about whether what they’re reading comes from an intelligent being.

    And they might start to be more particular about asking for examples.

    I asked the GPTs for some examples of intelligence-revealing examples (sorry) with this elaborate prompt:

    Suppose the conversation is about the difficulty of finding places to travel to without commercialized tourism. If this is the current premise of a conversation, what surprising and interesting idea might someone subsequently contribute that changes the pallor of the conversation? Also, can you give a few examples of remarks this person might make in which they provide good examples to illustrate their idea? Specifically, what examples might they provide that indicate their intelligence?

    The GPTs gave this a pretty good answer.

    New idea – dark tourism.

    Surprising and interesting example – an over-commercialized concentration camp, battlefield, or memorial.

    But I wanted a specific example. That’s what Christopher Hitchens would give me at a party, right? Not some abstract possibility.

    So I asked for a joke.

    Oof, not a good joke; it’s insensitive and inconsiderate to a large class of people. It’s not a joke that an intelligent person would make. 

    The comedian will be the last job on earth replaced by AI.

    Meanwhile, to avoid drowning in the mediocrity of AI-generated content, we will demand more interesting examples. And, in turn, we’ll hone our own example-having abilities.