During Thursday’s Forrester-sponsored Twitter conference (“Tweet Jam”) on “What Business Intelligence Is and Is Not” (search Twitter for #dmjam), I remarked that BI in general faces the problem that “people ask the wrong questions,” and that this will get worse with the impact of social media, applications, platforms, and patterns. I was challenged to provide some examples. Before I do, though, some background:
What I mean by “the wrong questions” is a problem that already afflicts BI, a peculiarly active form of the more general problem known as “confirmation bias,” and one that becomes more acute in the presence of “social media” dynamics.
You always come to BI with some sense of understanding how things work, some guesses at what you’re going to find. Often, your BI serves primarily to confirm your suspicions—and that’s a good thing, because it empowers you to move more forthrightly in the right direction. A problem arises, however, when your guesses and suspicions are actually wrong: because you frame the questions and interpret the answers, there’s still a strong tendency for the BI to confirm. Confirmation of your wrong guesses and misplaced suspicions is still going to empower you to move forthrightly, but in these cases, you’ll be moving in the wrong direction! Analysts generally understand this; decision-makers sometimes don’t. A critical part of the business analyst’s job is examining additional analyses that have the potential to disprove current guesses and suspicions. If they hold up, if you fail to disprove, great: you’re on the right track; you might not even show those results to the decision-maker. If you find holes, though, then even better: you’re saved from walking into a pit—or you will be, as soon as you show and explain them.
All that applies already, in “conventional BI.” But “social media” introduce a new problem: cooperative structures (like the OAuth system for delegated authorization) mean that some of the information you’d like is primarily in someone else’s hands. It may be that the protocol and partnerships allow you access to this extra information, but can you afford the performance and programming costs to collect it? If, as an analyst, you’ve routinely been looking at more data than you show the decision-makers (and you really should!), then you have a hard job ahead explaining why you suddenly need to spend so much more to collect information they’ve never seen! There’s a risk that you’ll only have access to the core information, the information that can only confirm guesses and suspicions that were already held at the time the system and partnerships were designed: you might lose the richness of data necessary for those counter-guess tests.
Some examples, since that was the challenge:
Imagine you’re a company that offers a free download of a limited version of your product. For a variety of reasons, you institute a registration system for the downloads. It’s a common enough arrangement: it increases potential customer exposure to the product, it identifies better-qualified leads, and it provides great, instant feedback on the effectiveness of new releases and ad campaigns. You’re going to mine those registrations and downloads for all the BI you can squeeze out. Lots of obvious, guess-confirming questions, there. But here’s a guess-denying question that should be asked (hoping, of course, that the answer is “no”): do people have some other, non-registered way to download the same files? Because if they do, then a lot of them are going to find it. You need the info on registered downloads, but you also need the comparative info on downloads that bypass the registration. Someone who thinks they know the system is likely to assume—without thought—that there aren’t any unregistered downloads; someone else needs to wonder and check.
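The guess-denying check above can be sketched as a simple comparison between the registration records and the raw web-server logs. Everything here is hypothetical: the field names, the data shapes, and the assumption that you can get at the raw log at all.

```python
# Hypothetical guess-denying check: do any downloads bypass registration?
# Both data sources and all field names below are invented for the sketch.

def unregistered_downloads(registrations, server_log):
    """Return server-log entries that have no matching registration."""
    registered_ids = {r["download_id"] for r in registrations}
    return [e for e in server_log if e["download_id"] not in registered_ids]

registrations = [
    {"download_id": "a1", "email": "lead@example.com"},
    {"download_id": "a2", "email": "other@example.com"},
]
server_log = [
    {"download_id": "a1", "path": "/files/demo.zip"},
    {"download_id": "a2", "path": "/files/demo.zip"},
    {"download_id": "zz", "path": "/files/demo.zip"},  # bypassed registration!
]

leaks = unregistered_downloads(registrations, server_log)
# If this list isn't empty, your registration-based BI is undercounting.
print(len(leaks))
```

The point isn’t the code, which is trivial; it’s that someone has to decide to run it against a data source the registration system wasn’t designed to show you.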
Filed under: Business Intelligence | 3 Comments
In the 1980s (you remember the 1980s … invention of steam power, extinction of the dinosaurs?), a number of computer hardware makers took trench warfare into new realms, using “standards” for bits of the UNIX operating system to support their competition over senselessly divergent other bits. They actually had conflicting “standards.” They even had a slogan, “waging standards,” for what they were doing to each other. Generally speaking, they managed to kill each other off, which is often what happens when you choose the metaphors of warfare to describe your business.
A key tactic in the waging of standards was this:
- Someone achieves momentary supremacy in some niche of the market.
- Two to ten competitors get together and write a “standard” whose text is, basically, “different from those guys.”
- The two-to-ten mount a massive advertising blitz to convince customers that a standard whatever-we’re-fighting-about is inestimably better than whatever “those guys” have, even if that one is actually better than the standard one. “Standard is better than better.”
- The market shifts fractional points, catastrophe (i.e., “those guys” succeeding) is averted, business is lost by all players, and all is right with the world.
The problem with this system … well, there are lots of them, but the main one is that it’s all about destroying business rather than generating it. Oh, sure, the business you destroy is someone else’s, but anyone can play the game, so everyone, sooner or later, gets to be “them.”
We grew out of that. Or, it expired in exhaustion. Or, it combusted of its own follies. I forget which; I may have blinked. Now, at any rate, we have Linux, which solves the “whose UNIX is it, anyway” conundrum by neatly allowing it to be everyone’s.
Except that “hope springs eternal in the human breast,” as Alexander Pope pointed out, and since we can’t compete on destroying UNIX any more, we really need to find something else. Let’s see now, what could it be?
Oh, I know! Social Networking! Or, as ReadWriteWeb calls it, Cracking Facebook’s Dominance. All the pieces are there: a momentary leader, a gang of also-rans, a standard mysteriously composed of “everything the also-rans have in common” without any of that messy “how the leader does the same thing.” What’s not to love?
Filed under: Business and other profanities |
Stormy Peters thinks Amazon could get more consumer business by opening their format (Stormy’s Corner: Amazon, let me give you more money!). But I think there might be bigger stakes than that!
Over the weekend, I read that California has now approved the use of “electronic text books” in class, instead of the old-fashioned dead-tree kind. If you live in California (perhaps other areas as well), your first thought, like mine, may have been “Well, that will be the death of the giant-backpack-with-wheels-and-towing-handle industry!” California’s approved textbooks have been creeping up on the Oxford English Dictionary with each new cycle, and watching a 3rd-grader drag around a graduate-studies research library like a caboose has become a constant regret.
But, no more! The California legislature has allowed California schools to use electronic books instead of print. You can get that whole back-pack into an Amazon Kindle, or a Barnes & Noble Nook, or a Sony Reader.
Or, can you? You see, the legislature really only authorized the state Superintendent of Schools to find suitable e-books. And given the history of centrally ordered, custom-manufactured, OED-busting gigabooks, the question will inevitably come up, “Which format shall we use?” And if you own a Kindle but the state chooses the Nook (or any other mismatched combination), well then, no, you won’t be able to get those books onto that device.
It seems like this e-textbooks idea stands a very good chance of driving something approaching monopoly in the device market, unless the device vendors can be persuaded to use an open format. PDF is a possibility, but not so attractive as you might suppose, since it tends toward a specific page layout, yet these devices have different-sized screens: if you scale the page so it fits, you may not be able to read it. I’m sure that can be avoided, but at the least it means a constant QA effort to ensure that every page displays reasonably on every device. Which I’m not sure I see a textbook publisher doing. So we might end up with only one supported reader, anyway.
(This all sounds so familiar … oh, wait … the HTML Wars!).
Filed under: Gov 2.0, Toys | 7 Comments
All right, deep breath, count to ten, and all that: Twitter’s fooling with the community dynamics again, and again they seem to completely miss the main point in a wash of secondaries. At least Evan Williams mentions the essential problem, which is that annotation during retweet is the real value. Indeed, many of us are searching for some way to utterly block the unannotated retweet, from anyone at all … which would include blocking any retweet using the new retweet mechanism, since it completely prevents annotation.
Evan clearly understands the point (see his paragraph, way way down near the bottom, that begins “The other thing some people will not like…”). And he’s clearly heard the outcry already rising over this (that paragraph notes that “it’s possible we’ll build that,” and ends with the plea “This point should not be missed”). OK, Evan, we didn’t miss the point, but “it’s possible we’ll build that” isn’t much reassurance.
I think I know why they keep missing what seems to me to be the primary point. Here’s a completely convincing and authoritative graph, composed of numbers I totally made up on the spur of the moment:
The horizontal axis of the graph represents the average value of a given person’s tweets. The height of the graph represents the number of twitterers whose average value is in this range.
To the left, we see the famous long-tail effect of all those who tried it and abandoned it. We all know there are many of these folks, but really, it doesn’t mean anything, and we don’t bother with it much. You can tell whether a given journalist understands anything at all about Twitter by whether they talk about this (loser), or anything else at all (worth attention). If there’s anything that needs doing about these folks, it would be general “creature comfort” improvements to keep them from bailing out.
In the middle, you easily notice a huge hump of people with moderate-value tweets. These are mostly the social tweeters, the people who flood the @public_timeline with helpful stuff like “Yay, no school,” or “eating second spoonful of ice cream.” This population grades up into music reviews, odd thoughts, man-on-the-street news reporting, and other stuff that has increasing probability of interesting at least one other human on earth (or even in the twitterverse).
At the right, you have the Tweeters who’ve learned how to get real value out of Twitter. Maybe they’re taking care of a community, maybe they’re organizing a political movement, maybe they’re just actual interesting personalities. Their numbers are much smaller than the middle group, as a few moments browsing the public_timeline will convince anyone, but they’re the real pay-off of Twitter, the thought leaders who’ll keep the traffic going while Twitter works out a business strategy.
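For the morbidly curious, here’s a sketch that renders a curve like that as a text histogram. The bucket labels and every number below are exactly as made up as the ones in the graph.

```python
# A text-mode rendering of the (admittedly made-up) tweet-value curve:
# a left-side crowd of abandoned accounts, a big hump of moderate-value
# social tweeters, and a small right tail of genuinely high-value tweeters.
# All numbers are invented, just like the original graph.

buckets = [
    ("abandoned", 60),   # tried it, bailed out
    ("low value", 40),
    ("moderate", 90),    # "Yay, no school" territory
    ("useful", 35),
    ("high value", 10),  # community builders, thought leaders
]

for label, count in buckets:
    print(f"{label:>10} | {'#' * (count // 2)}")
```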
The problem, as I see it, is that the changes Twitter are making focus on the middle group rather than the right-end group. Twitter should be working to draw the middlers into more dedicated, more continuous involvement. But instead, they seem to pander to these middlers, making it as comfortable as possible to neither get nor give value, but just dabble.
When Twitter were fixing performance and scalability issues, when the Fail Whale was our best friend, that made sense: go with the numbers. But that stuff’s pretty stable now; Mr. Whale hasn’t left the building, but he’s only a minor visitor these days. Now, Twitter can afford to build tweetership, loyalty, and face-time. Twitter changes should be guiding the middlers into more effective communication, not boxing them into current bad habits.
Filed under: Toys | 3 Comments
The White House – Blog Post – Reality Check has some interesting thoughts on the on-going health-care debate. And I’m afraid I don’t really mean that in a nice way.
First, the blog points out that a recent survey from WellPoint uses obviously “flawed” techniques to make an unsustainable point, as part of a “misinformation campaign designed to confuse and distract attention from those who are seeking real health care solutions.”
Then, the blog responds to “one novel argument worth noting”: the survey claims that “imposing fees on health insurance providers and drug and device makers represents a tax on individuals and families” (because the costs will be passed along to customers). The blog discounts this notion with three arguments:
- The bookkeeping necessary for insurance companies to pass these costs along would be prohibitive. But, excuse me: too much bookkeeping for an insurance company? They live and die, quite properly, on extensive, exhaustive, constantly revised bookkeeping. You think they can’t divide a fee based on their number of subscribers, by the number of subscribers?
- Even if they took the trouble to pass the fee on, other parts of the plan will save them money, so obviously they’ll pass the savings along, too. Again, excuse me? The very people using trumped up surveys to block progress are suddenly going to turn all nicey-nicey? We have what reason to expect this?
- And, anyway, consumers save because of the reduced hidden tax currently paid in care for the uninsured. But this “hidden tax” is paid through the mechanism of providers spreading costs to paying customers—i.e., insurance carriers—and ultimately of course to consumers (the legendary $100 slippers and all that). This last blog argument is so vague, I’m not even clear whether it foolishly supposes health care providers are not actually businesses, or whether it’s just reapplying that faulty logic to the insurance carriers. But either way: the current debate, the very survey that occasions this blog, provides all the evidence you need to know it ain’t so.
Filed under: Gov 2.0 |
Bing and Google are going to start indexing Tweets. They haven’t said so, but I’m guessing they’ll start using tweet insights in their search rankings as well; they already use every byte they can grab in half a dozen ways. The Google blog talks about “the next time you search for something that can be aided by a real-time observation,” which is no doubt a part of it all, but the infamous noise factor of Twitter clearly means that G & B will need to be clever about choosing which tweets to present; why not also use tweets to be clever about content from other sources? You know they’ll do it: search ranking is a huge point of competition for them (see BlindSearch for more details on that).
This all resonates in my head (like sticking my head in a church bell) with my experience of the “Balloon Boy” story, last week. You know the one:
- Boy, reportedly, may be trapped in an accidentally released helium balloon (I heard it on Twitter)
- Parents search their house frantically but hopelessly (I heard it on Twitter)
- Authorities track the balloon (I heard it on Twitter)
- Authorities search for helicopters and ultra-light aircraft to snatch the boy from the basket (I heard it on Twitter)
- Balloon comes down in a field — but no boy! (I heard it on Twitter)
- New search for the boy (or, as seemed frighteningly probable, his remains) along the route over which the balloon had floated (I followed it on Twitter, house by house, block by block)
- Whoops! Uh, oh … naughty naughty boy was hiding in the attic! (I heard it on Twitter)
- Authorities let out that they’re pursuing legal action, for perpetrating a hoax (Yup, heard it on Twitter)
- And then Yahoo! News picked up that “a boy, reportedly, was trapped in a helium balloon”!
Hard not to believe that maybe Twitter’s on to something here, don’t you think?
Filed under: Toys |
Well, no one ever doubted AT&T knows where the money is …
Back in May, there were rumors that AT&T might reach out to the iPhone crowd with lower rates. More recently, though, we’ve heard that AT&T views iPhone customers as “problems,” because they use the 3G services they’re paying for.
Now we find that AT&T’s discovered a more lucrative market: the Amazon Kindle. Because the data charges are hidden inside the book costs, Kindle customers are lulled into believing that their data plan is “free.” But it’s not! Not to the customers, who pay through their book purchase price, and not to Amazon, who actually writes the checks to AT&T. But this party-game of finger pointing serves the necessary smoke-screen role for proper consumer gouging, in a way that the iPhone data plans do not.
When AT&T starts making noises about throttling iPhone data usage, when AT&T is undismayed by a call drop rate deep in the double digits, when your iPhone suddenly, ridiculously claims you’re “not subscribed to a cellular data service,” as mine has ever since last Friday, then as an iPhone user, you know who you’re paying for this service failure, and you’re able to do all those annoying things like calling customer service. This apparently raises expenses at AT&T, presumably because they have to hire many actual people to ignore the calls, instead of the limited few (or perhaps just a little redirect to /dev/null) it takes to ignore any similar complaints from Amazon.
How about you, iPhone user? Are you using more than your fair share? Yes, I know, your data plan says “unlimited,” but apparently “unlimited” stops somewhere, perhaps at the $2 to $8 per MB Amazon’s passing along (I’m guesstimating one book at about one MB).
Easy to check:
- Go to Settings
- Go to General
- Go to Usage
- Look down at “Cellular Network Data”
- Divide by how much you’ve paid during the same period for your data plan
Are you “hogging” more bandwidth than 1MB/$8? I’d be interested to collect results in the comments section.
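If you’d rather not do the division in your head, here’s the arithmetic from the steps above as a tiny sketch. The usage and payment figures are placeholders; substitute your own numbers from Settings > General > Usage.

```python
# Back-of-the-envelope "bandwidth hogging" check, using the made-up
# yardstick above: 1 book-sized MB per $8. Both inputs are illustrative.

cellular_data_mb = 450.0   # "Cellular Network Data", sent + received
data_plan_paid = 90.0      # dollars paid for the data plan, same period

mb_per_dollar = cellular_data_mb / data_plan_paid
kindle_rate = 1.0 / 8.0    # 1 MB per $8, the guesstimated Kindle rate

print(f"{mb_per_dollar:.2f} MB per dollar")
if mb_per_dollar > kindle_rate:
    print("Congratulations: you're a bandwidth hog by Kindle standards.")
```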
Unfortunately, I can’t contribute my own numbers, as I lost all my history, settings, and downloads hard-resetting the device, hoping to clear the “not subscribed” problem. Fruitlessly.
Filed under: Mythoi, Toys | 3 Comments