Entries

  • George Washington on solo entrepreneurship

    My observation is that whenever one person is found adequate to the discharge of a duty… it is worse executed by two persons, and scarcely done at all if three or more are employed therein.

    In no area is this so true as messaging and copywriting.

    Consensus doesn’t work.

    This is why large consulting firms’ homepages are as interesting as cardboard – but smaller ones sometimes achieve snappiness.

    Nothing wrong with a team, tbh, but each unto their own task.

  • Endowing ideas with examples

    In The Art of Gig, Venkat said “Examples, examples, examples”.

    For what it’s worth, it’s the only point in the book where he repeats a word three times.

    His point was that examples are table stakes for an expertise-provider, such as a strategy consultant or a B2B software product.

    Examples are like currency, he says.

    It makes me imagine a coin with an idea on one side and an example on the other.

    An unexampled idea lacks money-weight; it floats away like a helium balloon not tied to a wrist.

    Not to get meta, but let’s look at an example of an exampled idea.

    Core DNA provides a “headless CMS”, which is an important part of the tech economy.

    First, let me set the stage with some general background: organizations create and manage lots of content – of many kinds for many kinds of people for many purposes. Maybe it’s content for tourists. Maybe it’s food menu content. Maybe it’s health information. Maybe it’s complex ecommerce information about, idk, furniture.

    You can create all of that with a WordPress-type CMS. But wait, do I then have to also display it on a WordPress-type site?

    Not if it’s a headless CMS like Core DNA: with this tool, content managed in one place can be displayed anywhere. Even really complex ecommerce content.

    What you read above, that’s the unexampled idea.

    Hold it in your mind for a moment, then read through some concrete examples of the Core DNA headless CMS in action:

    Does the idea of headless CMS feel a little more solid now?
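
    If it helps to make “manage content in one place, display it anywhere” concrete, here’s a minimal sketch of the headless pattern in code. To be clear, the endpoint, field names, and API key below are hypothetical placeholders, not Core DNA’s actual API – the point is only that the content arrives as plain JSON, which any website, app, or kiosk can then render however it likes.

    # Minimal sketch of the headless CMS pattern.
    # NOTE: the endpoint and field names are hypothetical placeholders,
    # not Core DNA's actual API – they just illustrate the idea.
    import requests

    CMS_API = "https://cms.example.com/api/content"  # hypothetical content API

    def fetch_menu_items(api_key: str) -> list[dict]:
        """Pull structured content (say, a food menu) out of the CMS as plain JSON."""
        response = requests.get(
            f"{CMS_API}/menu-items",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["items"]

    def render_as_html(items: list[dict]) -> str:
        """One possible 'head' among many: the same JSON could feed a mobile app or a kiosk."""
        rows = [f"<li>{item['name']} – {item['price']}</li>" for item in items]
        return "<ul>\n" + "\n".join(rows) + "\n</ul>"

    if __name__ == "__main__":
        print(render_as_html(fetch_menu_items(api_key="YOUR_API_KEY")))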

  • Messaging statement

    In business, a messaging statement captures a key value proposition in a thorough and exhaustive manner, at the expense of punchiness.

    Useful for certain situations and audiences but generally not ideal in sales and marketing.

    Example messaging statement:

    “Box provides secure collaboration solutions for businesses, enabling them to work safely and efficiently with anyone, on any device.” 

    Compare the above to example ‘messaging’ – capturing the same value proposition but more to the point and more effective:

    “Secure collaboration with anyone, anywhere, on any device”

  • Embracing your fungal destiny

    If what you do requires you to think, then generative AI cannot do what you do – not now, not ever.

    Does that mean that some future form of AI won’t be able to think? Of course not – but let’s cross that bridge if and when we come to it.

    So you’re safe. Or are you?

    Where it gets tricky is in how much non-thinking work is mixed in with our thinking work.

    And how much of our thinking work is a little stale.

    The gen AI opportunity for knowledge workers making digital products is threefold:

    • use it to probe for digital manual labor in your work, like an oyster mushroom performing mycoremediation on soil full of toxins
    • integrate it into your products and services to alleviate digital manual labor for users
    • use it to help you think more clearly

    I think the response to this is “yes, but how?”

    There are many answers but the one my mind stumbles on first is examples.

    What better use of gen AI in helping you think more clearly than chasing down examples?

    For example, how exactly have oyster mushrooms been used in the way referenced above? After the massive 2017 California forest fires, the Fire Remediation Action Coalition laid 40 miles of oyster mushroom tubes over the landscape to break down toxins.

    This is a concrete example but it doubles as a good metaphor; we might need to let gen AI cleanse our minds of the industrial-age mindset, like a fungal network rendering toxic soil fertile.


  • The fuzzy line between messaging and copy

    Bad messaging and bad copy read the same: wordy, incoherent, boring – leaving no dent in your brain.

    But when they are good, a distinction emerges. And they work well placed alongside each other.

    It’s good copy if it makes people think or do something. For example, the announcement bar on box.com: “Introducing Box AI! New intelligence capabilities will help you unlock the value of your content. Learn more”

    Messaging might make you want to do something too, but it also harmonizes with a company value proposition. Thus, it makes sense in a strategy document OR in marketing materials. For example, the subheadline on box.com: “Secure collaboration with anyone, anywhere, on any device”.

    The value prop is clear – how are we different from Dropbox and Google Drive? Collaboration. Messaging doesn’t just make you do something now – it sticks in your head and makes you do something 10 years from now.

    Beware of messaging or positioning statements though. They can get too wordy. For example: “Box provides secure collaboration solutions for businesses, enabling them to work safely and efficiently with anyone, on any device.” That’s not gonna stick for 10 years.

    Statements are sometimes useful for internal alignment. And they may work for a captive audience, like a private call with a potential partner.

    But even then, default to the pure form of messaging – it’s easier on the brain and closer to your strategy.

  • Is Seth Godin right – are we as devoid of self as ChatGPT?

    Seth Godin, back with another banger on his 7,794th consecutive day of publishing: https://seths.blog/2023/05/our-homunculus-is-showing/

    This is an unusually weighty and opinionated Seth Godin post. If you like cognitive science, advanced AI, Talmudic neo-Platonism, Jungian philosophy, and Buddhist thinking – let’s just call it non-dualism – then you’ll want to read this. What follows below is my LinkedIn comment on it, verbatim:

    I loved this nuanced take on AI; it’s neither for nor against it. Speaking for myself, I crave this right now, as I am just trying to understand better.

    One thing I get out of this is that AI should enhance our thinking but cannot think for us.

    Also, that we can incorporate AI into our work, our products, or our office tools, but we have to be aware that there’s no thinking person doing any work – and if we make products/works, we have to be transparent with users/audience about this.

    There’s also a spiritual / philosophical angle in this article that I am not sure how I feel about. The idea seems to be that we anthropomorphize an ego onto AI – because we cling to our own ego. That I get, but does that mean our ego/self and free will don’t exist?

    “We’re simply code, all the way down, just like ChatGPT.

    It’s not that we’re now discovering a new sort of magic. It’s that the old sort of magic was always an illusion.”

    Is that true?

    I’m agnostic on this point but it’s interesting and it’s something that could influence how I integrate gen AI into the products I work with.

    If both we and AI are just code all the way down, do we just merge our codebases?

  • A pre-obituary for google>copy>paste

    First, I’m citing an article by a guy named “Packy”. Can we just let that beautiful oddity sink in for a moment…? Thank you. In Packy McCormick’s article “Intelligence Superabundance” he explores a dominant meme and its offshoots: principles, laws, paradoxes, etc.

    From Induced Demand: you build more trains, more people take public transport…

    To Parkinson’s Law: budget more time for a project and the project ends up taking almost exactly that long; budget less time, less time needed…

    To Moore’s Law: microchip compute capacity doubles every two years – demand increases (note: not actually in Packy’s article, but it fits)…

    To Jevons Paradox: when technological innovation improves efficiency, total resource consumption increases anyway…

    After setting these ideas before us, Packy McCormick then does something pretty cool: he applies them to AI.

    He asks: what if an abundance of intelligence leads to increased demand for intelligence?

    If so, then will more intelligence be demanded of our work?

    This question falls in line with Sam Altman’s congressional testimony: Yes, I will take your jobs – but I will create newer jobs in their stead, and better ones.

    Now flip that.

    If people are supplying and demanding more intelligent work, there’s less room for humdrum bullshit work.

    In fact, let’s take it a step further and, as Kipp Bodnar puts it, disrupt yourself now – get rid of the bullshit work, the repetitive and brainless tasks, before the robots disrupt it for you.

    The design of business software and services should be predicated on this principle – that the era of manual digital labor (“google it, copy it, paste it”) is over.

  • A surprise conclusion after investigating the productized services spectrum

    There’s a productized services spectrum.

    On one end, sat right next to custom 1:1 services, is something that hovers just above it: the habit or ability to regularize services. This looks like a price, a timeframe, inputs, and outputs all packaged together. Over time, they might all flow nicely within a single sentence uttered during a client call.

    Moving closer to the center of the productized services spectrum, your thing-you-do gets written down, put on a web page, maybe given a name.

    Go a bit further and perhaps it acquires the polish and patina of a brand. Maybe it even has self-service ecommerce.

    But how about all the way over on the other side of the spectrum?

    If you travel far enough on the productization spectrum, a truth emerges: you have a “product”.

    Let’s flip that: all B2B products are actually productized services.

    Look at Google: GCP and its B2B offerings are now in the tens of billions in annual revenue. Sure, Google would rather die than provide services to consumer customers, but its business customers get the usual client services interactions:

    • strong relationships are built
    • deep understanding of customer needs
    • help executing on strategy so that needs are met

    Everywhere you go, it’s the same story: you’re in the relationships game; forming good ones is a pretty solid product strategy.

  • Virtual assistant

    I love this mural; it roughly translates to “To be born, Portugal: to die, the world”. It’s on Rua Sao Bento, in Lisbon – a fact which I found impossible to determine by using Google, Google Maps, Bing, or ChatGPT.

    How do I know it’s on Rua Sao Bento then? I used a sort of virtual assistant, an “agent”, called BabyAGI. You can try it here: https://babyagi-ui.vercel.app/ with an API key for an LLM like OpenAI’s.

    Basically BabyAGI (questionable name) automates the combined and focused searching of the web in an iterative process, against a goal that you set for it.

    In my case, the goal was, “find the street in Lisbon, Portugal in which the mural «Para nascer, Portugal: para morrer, o mundo.» is located”.

    I also gave it a first step: find references to this mural online in English and in Portuguese. In concrete terms, that meant it would perform a Google search, click on results, read them, and pass what it found back to the language model for analysis.

    As it does all this, it tries to fulfill the goal. If it does, it stops.

    If it doesn’t fulfill the goal – and this is the key – it creates its own next step, using AI. And it keeps doing this, over and over again, to infinity (or for as many steps as you limit it to).
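
    To make that loop concrete, here’s a rough sketch in code. This is not BabyAGI’s actual implementation; llm() and web_search() are hypothetical stand-ins for whatever model and search calls the tool really makes – what matters is the “create your own next step” loop.

    # Rough sketch of an agent loop in the BabyAGI style.
    # NOTE: llm() and web_search() are hypothetical stand-ins, not real APIs –
    # this illustrates the loop, not BabyAGI's actual implementation.

    def llm(prompt: str) -> str:
        """Placeholder for a call to a language model, e.g. via an API key you supply."""
        raise NotImplementedError

    def web_search(query: str) -> str:
        """Placeholder for a web search that returns page text for the model to read."""
        raise NotImplementedError

    def run_agent(goal: str, first_task: str, max_steps: int = 10) -> str | None:
        task = first_task
        for _ in range(max_steps):  # cap the loop so it can't run forever
            # 1. Execute the current task: search the web, let the model read the results.
            findings = llm(
                f"Goal: {goal}\nTask: {task}\nSearch results:\n{web_search(task)}\n"
                "Summarize anything relevant to the goal."
            )
            # 2. Check whether the goal is now fulfilled.
            verdict = llm(
                f"Goal: {goal}\nFindings: {findings}\n"
                "If the goal is fulfilled, reply 'ANSWER: <answer>'. Otherwise reply 'CONTINUE'."
            )
            if verdict.startswith("ANSWER:"):
                return verdict.removeprefix("ANSWER:").strip()
            # 3. Not done – the model invents its own next task, and we loop again.
            task = llm(
                f"Goal: {goal}\nFindings: {findings}\n"
                "Propose the single next task that gets us closer to the goal."
            )
        return None  # goal not reached within the step limit

    # Example usage, mirroring the mural hunt described above:
    # run_agent(
    #     goal="Find the street in Lisbon, Portugal where the mural "
    #          "«Para nascer, Portugal: para morrer, o mundo.» is located",
    #     first_task="Find references to this mural online in English and in Portuguese",
    # )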

    There’s a hurricane of AI information this year but I feel that this is one of the most important pieces of it. It takes a little practice to use an agent like this, as with Google or ChatGPT. It also takes a little more planning.

    But if you can figure out how to delegate parts of your work to it, then maybe you’ll figure out how your products and services can do the same.

  • The shark as a metaphor for messaging

    What’s tricky with product messaging is that the product, like interest rates and the universe itself, is in a state of perpetual change.

    Growth is the most common type of change.

    • When HubSpot started, it was an inbound marketing and lead generation platform – something closer to ConvertKit. Now it’s grown into a CRM used for sales, marketing, and customer support/services, a CMS, an enterprise ESP, and has sales enablement and other tools through its large app store.
    • When Slack launched, it was basically an updated version of Campfire – a team messaging platform. Now it’s a course classroom venue and cozy web venue for community cultivation (among other things).
    • When ChatGPT started, it was a multi-purpose conversational AI chatbot. Now we understand that it’s a natural language UI that lets you control multiple software products and databases through plugins.

    It’s the depth-to-breadth product evolution. (Usually; sometimes it goes in the other direction, as with Basecamp.)

    Either way, the messaging and the product itself are moving together in imperfect parallel. And that’s cyclical:

    • Sometimes, the messaging comes before the product feature. This is the essence of landing page feature validation
    • Other times, the product feature comes first, then the messaging
    • Sometimes, messaging starts in the FAQ then becomes a headline

    Like a shark that never sleeps, product messaging is never perfect, never fully right or wrong, and hard to pin to the wall for more than a couple of months. But that doesn’t mean it can’t be powerful.