Striking numbers on the disappearance of “productive jobs”. There is probably a healthy and passionate debate to be had on this topic, and I’m not sure the situation is quite as described here. But beyond the numbers themselves, the trend looks pretty clear to me.
A recent report comparing employment in the US between 1910 and 2000 gives us a clear picture. Over the course of the last century, the number of workers employed as domestic servants, in industry, and in the farm sector has collapsed dramatically. At the same time, “professional, managerial, clerical, sales, and service workers” tripled, growing “from one-quarter to three-quarters of total employment.” In other words, productive jobs have, just as predicted, been largely automated away (even if you count industrial workers globally, including the toiling masses in India and China). […]
we have seen the ballooning not even so much of the “service” sector as of the administrative sector, up to and including the creation of whole new industries like financial services or telemarketing, or the unprecedented expansion of sectors like corporate law, academic and health administration, human resources, and public relations. And these numbers do not even reflect on all those people whose job is to provide administrative, technical, or security support for these industries […]
These are what I propose to call “bullshit jobs.”
Why we should not overlook China’s capability to innovate:
“China’s ability to innovate is massively underrated, particularly in America. […] The biggest 3D printer in the world is in China, and what are they using it for? To build aircraft parts for the Chinese homegrown airliner that’s going to take on Boeing and Airbus. And Boeing and Airbus laugh at this going, ‘Don’t you know it takes decades to develop and we don’t have to worry about that’. This is what people said about Japanese watches and Japanese radios and look what happened – Sony took over that industry. This is all very much worth keeping an eye on, and I think there’s a real danger of complacency in the West when it looks at China.”
All this buzz around Burning Man makes me wonder how the festival will survive this overload of attention. Events are like vacation spots: they have two distinct lives. There is before they are known, and there is after, when the buses of French/German tourists (vacation spots) or marketers/salesmen (events) arrive and make the whole experience different.
It’s one of the contradictions you have to deal with when building an event series. You start with a set of values and a precise goal. That usually attracts a small, coherent community. Then things grow: you have more means to accomplish your mission, your impact is bigger, you welcome new people to the community.
But there comes a moment when growth starts to distance you from your ideas. When you start losing the intimate touch of the beginnings. When the community begins to include people with opposite objectives and values. It’s usually the time for (some) early followers to leave, and you hear the event’s veterans say “those first editions were so much better, the original spirit is now lost”.
Burning Man is at that crossroads, where it will have to choose between becoming more popular and mainstream, or staying out of the spotlight. It will be an interesting evolution to follow. I wonder if some early participants have already started their small spin-off somewhere, a place only 500 people know about.
Have you heard of YxYY? It’s an event that was started by early SxSW attendees who did not recognize themselves in what the Austin megafest has become:
“SXSW has lost its early energy amid its increasing size and rapid transformation into a marketing free for all. Why not pull together a smaller, more focused event, and have it be all the things that SXSW no longer was?”
SxSW is Ibiza in 2010; YxYY is Koh Samui in the seventies. Where will you go to spend your next week off?
I just finished the first day of teaching for the group of 85-year-olds I am teaching social media to (background info here). A few thoughts after spending four hours with six amazing grandmas:
- We younger users do not realize how some things we do intuitively are TOTALLY foreign to a person who has never seen a computer. We spent 90 minutes explaining the (on-screen iPad) keyboard and navigating inside text. This is SO complex! I do this with my eyes closed and never realized there are so many different gestures. In the notepad, for example: tapping to select a word directly. Tapping to make the select/select all menu appear. Tapping between words to make the magnifying glass appear. Tapping at the end of the line to position the cursor there. There is tapping, tapping and holding, tapping and holding and moving. This was by far the most complex thing they had to understand. I guess if we were using a real computer (keyboard + mouse) it would have been much easier, but my takeaway for the day was that text management has a long way to go on tablets.
- Another thing we do easily: bringing our fingers to the touchscreen in a vertical movement. Do it horizontally and a larger part of the finger touches the glass, so the iPad has no idea where the action should happen and therefore does nothing. I had to explain how to bring your finger to the screen; it’s not obvious! The other thing is that younger people’s hands do not shake. Because of even a very slight shake, one of the ladies had to tap 5-6 times before the iPad would understand she wanted to tap, not scroll. Because she could not hold her finger in place for even half a second, she had a really hard time.
- A magic moment, when one of the participants, looking at a picture she just took, asks: “too bad, there are fingerprints all over my pictures [actually, on the touchscreen]. How do I remove them?” She thought that what was on the screen had made it to the picture itself. Funny, and somehow logical that she would ask.
- Apple devices have become more complicated. I remember four years ago setting up a new iPhone meant answering 2-3 simple questions. Now it took me 50 minutes to walk the group through the setup of the iPads: allow location services or not, allow Siri or not, connect to Wi-Fi, choose language, location, etc. We didn’t even open an Apple account (too time consuming) and of course didn’t read the bloody terms of service. I understand why each step exists, and I know that when I did the setup myself it only took me 3 minutes, but someone with no previous experience can’t be expected to 1) do this easily, or 2) fully understand the implications of those decisions.
- These people (born between 1930 and 1940) are still extremely sharp, social, and interested in a lot of things. Most of them are drawn to computers because they allow them to socialize with their kids. One of the participants has a really hard time engaging with our lessons because all her family is in Geneva, very close to her. She therefore sees no need for computers.
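The tap-versus-scroll trouble above comes down to thresholds: a touch system has to decide whether a finger stayed still long enough to be a tap, lingered into a long press, or drifted into a scroll. A minimal sketch of that decision logic, with made-up threshold values (real devices tune these per screen and the actual iOS gesture recognizers are more sophisticated):

```python
from math import hypot

# Hypothetical thresholds for illustration only.
MOVE_THRESHOLD_PX = 10      # finger drift allowed before a touch becomes a scroll
LONG_PRESS_SECONDS = 0.5    # hold time before a touch becomes a long press

def classify_touch(samples):
    """Classify a touch as 'tap', 'long_press', or 'scroll'.

    `samples` is a list of (t, x, y) tuples from finger-down to finger-up.
    A shaky hand that drifts more than MOVE_THRESHOLD_PX reads as a scroll,
    which is exactly what tripped up the participant above.
    """
    t0, x0, y0 = samples[0]
    duration = samples[-1][0] - t0
    drift = max(hypot(x - x0, y - y0) for _, x, y in samples)
    if drift > MOVE_THRESHOLD_PX:
        return "scroll"
    if duration >= LONG_PRESS_SECONDS:
        return "long_press"
    return "tap"

# A steady finger registers a tap; one that drifts 15 px registers a scroll.
print(classify_touch([(0.0, 100, 100), (0.1, 101, 100)]))   # tap
print(classify_touch([(0.0, 100, 100), (0.2, 112, 109)]))   # scroll
```

Seen this way, the problem is that thresholds are calibrated for young, steady hands; widening the drift tolerance (or lengthening the tap window) is an accessibility knob, not a luxury.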
The US is on a hot streak: after inventing weapons of mass destruction to justify a war that cost the country 2 trillion dollars, now comes the surveillance program that will take billions off the balance sheets of the tech giants. PRISM is the best thing that has happened to the European digital industry in a long time. If you work in the marketing department of a non-American cloud company, Snowden just wrote your next ad. See, for example, Jottacloud’s sales pitch:
“Jottacloud is a Norwegian company with Norwegian owners, and we operate under Norwegian privacy laws. We store all your files in Norway.
As a result, our users are protected against U.S. legislation, which arguably infringe the freedom and liberties of both U.S and non-U.S. citizens.
That makes Jottacloud great for people who want an alternative to U.S. based cloud services like SkyDrive, Dropbox and iCloud.”
I can’t say enough how school almost destroyed me, and how it seems that most of the people I’m around these days (entrepreneurs, creatives) had the same experience. I’m not sure what the solution is; I know the school system is full of passionate and good people, but there is a HUGE problem. I am starting to think it is time to start a serious debate on this, and to invite Sugata Mitra (you know, this guy) for another good discussion.
“Schools as we know them today are a product of history, not of research into how children learn. The blueprint still used for today’s schools was developed during the Protestant Reformation, when schools were created to teach children to read the Bible, to believe scripture without questioning it, and to obey authority figures without questioning them. The early founders of schools were quite clear about this in their writings. The idea that schools might be places for nurturing critical thought, creativity, self-initiative or ability to learn on one’s own — the kinds of skills most needed for success in today’s economy — was the furthest thing from their minds. To them, willfulness was sinfulness, to be drilled or beaten out of children, not encouraged.”
Three interesting data points to consider in the imitation vs. innovation debate. I’m not sure how much credit these figures deserve (can the “cost of imitation” be consistent across industries? How was it measured in the first place? etc.) but they certainly raise interesting questions.
• The costs of imitation are 60-75% the costs of innovation
• Imitation took nearly a hundred years during the 19th century. Between 1877 and 1930, the average “time to imitation” of a new product/service dropped to 23.1 years. […] In the 1950s it was 2 years. Now it seems to be 12-18 months. From 100 years down to 12-18 months. That’s some massive acceleration of diffusion.
• Pioneers who create new markets generally end up with around 7% of the markets they create. The copycats get the rest.
What’s interesting with the internet is that it’s now an old enough technology that senior citizens like me can talk about certain things that anyone below 30 will have never heard of 😉
While I was checking Vine and Instagram yesterday, it hit me that these brand-new apps are actually iterations of the Seesmic concept, which started as the “Twitter for video”, allowing users to post short clips of themselves talking in front of their laptop’s camera.
It was the same thing, but with small differences: there was no time limit (Vine is max 6 seconds), Seesmic leveraged laptop cameras (not phones), 2008 technology was not as flawless as today’s, and there were no smartphones. In the end Seesmic never took off, with a reported 20k monthly users in 2008. In 2009 it pivoted to become a social media client, and its user base was acquired by HootSuite in 2012.
It’s interesting to see how an idea, if good, will still impose itself. It just takes time:
- Technology has to be ready: Seesmic didn’t have a mobile user base, so shooting video was limited to a laptop’s camera. All videos looked the same, and the quality was much lower than that of smartphone lenses.
- Users have to be ready: Seesmic looked like a tool for early adopters, one that demanded a good knowledge of both technology and public speaking. Nowadays everybody under 35 shoots three minutes of video every day.
- Society has to be ready: in 2008 the prevailing view was that social media were the realm of futile discussions and self-promotion; anything but interesting stuff.
There are plenty of other factors (design, usability, etc) but it must be a little bit comforting for founder Loic Le Meur to think that his vision was right.
This reminds me of Walter Isaacson’s realization in Steve Jobs’ biography: there is “an aesthetic flaw in how the universe works: the best and most innovative products don’t always win.” Sometimes the winners are those who come later and copy.