Author: remap_content_admin

  • The question of strategy

    “A set of ideas that inspire a move to a position of advantage over a meaningful period of time.”

    To transpose that into bullets:
1. A set of ideas
2. that inspire
3. a move to
4. a position of advantage
5. over a meaningful period of time

     
This is the definition of strategy or strategic that I work with. It’s not supposed to be the be-all and end-all definition of strategy. Take it or leave it. But I find it useful in my area of work for asking the questions:
     
• Is this actually a strategy?
    • Is this actually strategic?
     
    But are those the right questions? By common practice, yes. Almost always when we create, propose, or implement a strategy, the beneficiary is us.
     
    But as product owners, we need to go deeper – we need to ask: is this actually strategic for our customers?
     
    Forget about whether it’s strategic for you. Instead – is it:
     
    • a set of solutions (ideas) to problems that our customers have
    • that inspire them, or help them imagine a different reality,
    • to move or transform into a different kind of business
    • in a way that gives them a competitive advantage
    • over a period of time that lets them capitalize on all of the above

     

A strategic solution doesn’t need to do shit for the owners of the solution – it needs to do something for the people using it.

  • Precision

    Generative AI will force products to be hyper-precise about what they do in their marketing.

This is a change from SEO thinking, in which marketing content needed to have the right keyword phrases: an interconnected web of content, each piece laced with strategic keyword phrases related to how people search for things.

    The idea was, someone might search for “photoshop alternative” in lots of different ways, so include all 13 of them in your online presence. Then measure, adjust, refine, etc.

    I think this imprecise “fishing net” approach to optimizing content for SEO affected

    • other areas of marketing
    • marketing strategy as a whole
    • product strategy as a whole

LLMs change this. Already they are far better tools than search engines at finding information. Adoption will grow over time; the major search engines are already incorporating LLM results into their traditional search results.

    But the funny thing is that LLMs will enforce precision in the way you talk about your product. I’m not saying they won’t be gameable, but they won’t care about your cloud of keywords.

    What will work instead is a precise and unique description of your product.

  • Scarcity messaging

    Researchers measured the IQ of over 400 farmers in hot Tamil Nadu, southern India, before and after their annual harvest.

    For context, this harvest yields them 60% of their annual income, in one swoop.

    These farmers are usually poor, but for that one brief period of time, they are flush with cash. They have options.

The researchers found that the farmers’ IQ increased by 10% after their harvest cash came in.

    Then, as in Flowers for Algernon, their IQ plummets as they descend again into poverty. When cash is scarce, the IQ drops.

    The same researchers found similar results among a similar-sized group of low-to-middle income shoppers at a mall in New Jersey. One sample group faced a hypothetical $1,500 car repair bill. The other faced a $150 car repair bill.

    Those faced with the $1,500 car repair bill lost 14 IQ points – instantly.

    One explanation: as the mind races with stress hormones, cognitive capacity shrinks.

So inculcating scarcity into your messaging might work – and as the study’s authors point out, poverty begets poverty. So it might work multiple times over a lifetime engagement.

    But here’s a more interesting product messaging challenge: selling to people at their smartest.

• George Washington on solo entrepreneurship

    My observation is that whenever one person is found adequate to the discharge of a duty… it is worse executed by two persons, and scarcely done at all if three or more are employed therein.

    In no area is this so true as messaging and copywriting.

    Consensus doesn’t work.

This is why large consulting firms’ homepages are as interesting as cardboard – but smaller ones sometimes achieve snappiness.

Nothing wrong with a team, tbh, but each unto their own task.

     

     

     

  • Endowing ideas with examples

    In Art of Gig, Venkat said “Examples, examples, examples”.

    For what it’s worth, it’s the only point in the book where he repeats a word three times.

His point was that examples are table stakes for an expertise-provider, such as a strategy consultant or a B2B software product.

    Examples are like currency, he says.

    It makes me imagine a coin with an idea on one side and an example on the other.

    An unexampled idea lacks money-weight; it floats away like a helium balloon not tied to a wrist.

    Not to get meta, but let’s look at an example of an exampled idea.

Core DNA provides a headless CMS, which is an important part of the tech economy.

    First, let me set the stage with some general background: organizations create and manage lots of content – of many kinds for many kinds of people for many purposes. Maybe it’s content for tourists. Maybe it’s food menu content. Maybe it’s health information. Maybe it’s complex ecommerce information about, idk, furniture.

You can create all of that with a WordPress-type CMS. But wait – do I then also have to display it on a WordPress-type site?

Not if it’s a headless CMS like Core DNA: with this tool, content managed in one place can be displayed anywhere. Even really complex ecommerce content.
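The one-place-to-many-displays idea can be sketched in a few lines of Python. Everything below is hypothetical – it illustrates the headless pattern in general, not Core DNA’s actual API:

```python
# A minimal sketch of the headless-CMS idea: content lives in one
# structured store, and any number of "heads" (renderers) display it.
# All names here are made up for illustration.

# The single source of truth: structured content, no presentation baked in.
CONTENT_STORE = {
    "menu-item-42": {
        "title": "Wood-fired Margherita",
        "price": 14.50,
        "description": "San Marzano tomatoes, fresh basil, mozzarella.",
    }
}

def get_content(content_id: str) -> dict:
    """Stand-in for a call to a headless CMS content API."""
    return CONTENT_STORE[content_id]

# Head 1: a web page.
def render_html(content_id: str) -> str:
    item = get_content(content_id)
    return f"<h2>{item['title']}</h2><p>{item['description']}</p>"

# Head 2: a plain-text menu board, driven by the exact same content.
def render_menu_line(content_id: str) -> str:
    item = get_content(content_id)
    return f"{item['title']} ... ${item['price']:.2f}"

print(render_html("menu-item-42"))
print(render_menu_line("menu-item-42"))
```

Swap the in-memory dict for a real content API and the renderers don’t change – that decoupling is the whole point of “headless”.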

    What you read above, that’s the unexampled idea.

    Hold it in your mind for a moment, then read through some concrete examples of the Core DNA headless CMS in action:

    Does the idea of headless CMS feel a little more solid now?

  • Messaging statement

In business, a messaging statement captures a key value proposition in a thorough and exhaustive manner, at the expense of punchiness.

    Useful for certain situations and audiences but generally not ideal in sales and marketing.

    Example messaging statement:

    “Box provides secure collaboration solutions for businesses, enabling them to work safely and efficiently with anyone, on any device.” 

Compare the above to example ‘messaging’ – capturing the same value proposition but more to the point and more effective:

“Secure collaboration with anyone, anywhere, on any device”

  • Embracing your fungal destiny

If what you do requires you to think, then generative AI cannot do what you do – not now, not ever.

Does that mean that no future form of AI will be able to think? Of course not – but let’s cross that bridge if and when we come to it.

    So you’re safe. Or are you?

    Where it gets tricky is in how much non-thinking work is mixed in with our thinking work.

    And how much of our thinking work is a little stale.

    The gen AI opportunity for knowledge workers making digital products is multi-fold:

• use it to probe for digital manual labor in your work, like an oyster mushroom performing mycoremediation on soil full of toxins
    • integrate it into your products and services to alleviate digital manual labor for users
    • use it to help you think more clearly

I think the natural response to this is “yes, but how?”

    There are many answers but the one my mind stumbles on first is examples.

What better use of gen AI in helping you think more clearly than chasing down examples?

For example, how exactly have oyster mushrooms been used in the way referenced above? After the massive 2017 California forest fires, the Fire Remediation Action Coalition laid 40 miles of oyster mushroom tubes over the landscape to break down toxins.

    This is a concrete example but it doubles as a good metaphor; we might need to let gen AI cleanse our minds of the industrial-age mindset, like a fungal network rendering toxic soil fertile.

     

  • The fuzzy line between messaging and copy

    Bad messaging and bad copy read the same: wordy, incoherent, boring – leaves no dent in your brain.

    But when they are good, a distinction emerges. And they work well placed alongside each other.

    It’s good copy if it makes people think or do something. For example, the announcement bar on box.com: Introducing Box AI! New intelligence capabilities will help you unlock the value of your content. Learn more

    Messaging might make you want to do something too, but it also harmonizes with a company value proposition. Thus, it makes sense in a strategy document OR in marketing materials. For example, the subheadline on box.com: “Secure collaboration with anyone, anywhere, on any device”.

    The value prop is clear – how are we different from Dropbox and Google Drive ? Collaboration. Messaging doesn’t just make you do something now – it sticks in your head and makes you do something 10 years from now.

    Beware of messaging or positioning statements though. They can get too wordy. For example: “Box provides secure collaboration solutions for businesses, enabling them to work safely and efficiently with anyone, on any device.” That’s not gonna stick for 10 years.

    Statements are sometimes useful for internal alignment. And they may work for a captive audience, like a private call with a potential partner.

    But even then, default to the pure form of messaging – it’s easier on the brain and closer to your strategy.

• Is Seth Godin right – are we as devoid of self as ChatGPT?

    Seth Godin, back with another banger on his 7,794th consecutive day of publishing: https://seths.blog/2023/05/our-homunculus-is-showing/

This is an unusually weighty and opinionated Seth Godin post. If you like cognitive science, advanced AI, Talmudic neo-Platonism, Jungian philosophy, and Buddhist thinking – let’s just call it non-dualism – then you’ll want to read this. What follows below is my LinkedIn comment on it, verbatim:

    I loved this nuanced take on AI; it’s neither for nor against it. Speaking for myself, I crave this right now, as I am just trying to understand better.

    One thing I get out of this is that AI should enhance our thinking but cannot think for us.

Also, that we can incorporate AI into our work, or products or office tools, but we have to be aware that there’s no thinking person doing any work – and if we make products/works, we have to be transparent with users/audience about this.

There’s also a spiritual / philosophical angle in this article that I am not sure how I feel about. The idea seems to be that we anthropomorphize an ego onto AI – because we cling to our own ego. That I get, but does that mean our ego/self and free will don’t exist?

_“We’re simply code, all the way down, just like ChatGPT. It’s not that we’re now discovering a new sort of magic. It’s that the old sort of magic was always an illusion.”_

    Is that true?

    I’m agnostic on this point but it’s interesting and it’s something that could influence how I integrate gen AI into the products I work with.

    If both we and AI are just code all the way down, do we just merge our codebases?

  • A pre-obituary for google>copy>paste

First, I’m citing an article by a guy named “Packy”. Can we just let that beautiful oddity sink in for a moment…? Thank you. In Packy McCormick’s article “Intelligence Superabundance” he explores a dominant meme and its offshoot principles, laws, paradoxes, etc.

From Induced Demand: build more trains, and more people take public transport.

To Parkinson’s Law: budget more time for a project and the project ends up taking almost exactly that long; budget less time, and less time is needed.

To Moore’s Law: microchip compute capacity doubles every two years, and demand increases (note: not actually in Packy’s article, but it fits).

To Jevons Paradox: when technological innovation improves efficiency, total resource consumption increases anyway.

After setting these ideas before us, Packy McCormick then does something pretty cool: he applies them to AI.

    He asks: what if an abundance of intelligence leads to increased demand for intelligence?

    If so, then will more intelligence be demanded of our work?

This question falls in line with Sam Altman’s congressional testimony: Yes, I will take your jobs – but I will create newer jobs in their stead, and better ones.

    Now flip that.

    If people are supplying and demanding more intelligent work, there’s less room for humdrum bullshit work.

    In fact, let’s take it a step further and, as Kipp Bodnar puts it, disrupt yourself now of the bullshit work – the repetitive and brainless work tasks – before the robots disrupt it for you.

    The design of business software and services should be predicated on this principle – that the era of manual digital labor (“google it, copy it, paste it”) is over.