How to Make Short-Form Videos as Tutorials, and Why You Might Want To

[Photo: Miffy Lamp at Night]
If Cindy Craig were an on-trend technology company, she would describe her work as “microlearning.” Mercifully, because she’s a librarian, she talks instead about making short-form video (<15 seconds) as a happy medium between the unwatched screencast and tutorials with static screenshots.

Craig has a splendid new essay up in In the Library with the Lead Pipe, called “Modular Short Form Video for Library Instruction”; although it’s pitched at librarians, it’s useful for anyone interested in teaching multi-step processes.

She begins with a quick review of some focus-group research and learning theory, suggesting that although screencasts aren’t very popular with students, there is good reason to believe that audio-visual instruction works better than purely visual instruction.

Then, technology happened: Craig started out making videos for Vine, the Twitter-adjacent video service that restricted videos to 6 seconds. Unfortunately, Vine is today best known as late and lamented, and so she had to reorganize everything. This time she focused on Snapchat, which allows 15-second videos, and Instagram, which allows 10 (and which also has the Boomerang app that cycles images quickly back and forth). You can see the results here: http://ift.tt/2xh0pCF.

Craig’s list of best practices is pretty sound:

  • Carefully map out the research process from start to finish. Don’t assume users will even know how to find your library’s website.
  • Break up the research process into smaller chunks. Think about where users are likely to get stuck or confused. Your videos should help users over these hurdles.
  • If you plan to capture screens from a database, have a partner click through the screens while you hold the smartphone or tablet.
  • As you film, add simple narration to clarify what is being shown. Avoid distracting music or sound effects.
  • Use captions to make your videos more accessible and to reinforce the message.

A slightly more formal way of putting the first three is: storyboard your videos before filming, which definitely saves time in editing.

I will say I was a little surprised that these videos are made by pointing a camera at a screen, rather than with inexpensive screencasting software, because to my (um, “middle-aged”) eye they look a little dark, especially on my laptop. Having said that, the approach might also make them more engaging. Again, judge for yourself, and do read Craig’s article!

Do you use short-form videos for instruction? How has it worked for you? Let us know in comments!

Photo “Miffy Lamp at Night” by Flickr user Sharon VanderKaay / Creative Commons licensed CC BY 2.0

via ProfHacker http://ift.tt/2xWHwtc

Wisconsin Regents Approve a 3-Strikes Policy to Deal With Students Who Disrupt Speakers

The University of Wisconsin system’s Board of Regents on Friday approved a policy that will compel campuses to suspend and, eventually, expel students who repeatedly disrupt controversial speakers and speech, the Milwaukee Journal Sentinel reported.

According to the board’s agenda, any student who has been found responsible twice for disrupting another person’s free speech will be suspended for a minimum of one semester. A student who has disrupted someone else’s speech three times will be expelled.

The policy also states that protests that disrupt the ability of others to listen to or engage with speech are not allowed and “shall be subject to sanction.”

Legislative bodies in several states, including the Wisconsin Assembly, passed similar campus-speech measures earlier this year that were based on model legislation created by the Goldwater Institute, a libertarian and conservative public-policy think tank.

via The Ticker http://ift.tt/2xnHUwz

Elsevier Launching Rival To Wikipedia By Extracting Scientific Definitions Automatically From Authors’ Texts

Elsevier is at it again. It has launched a new (free) service that is likely to undermine open access alternatives by providing Wikipedia-like definitions generated automatically from texts it publishes. As an article on the Times Higher Education site explains, the aim is to stop users of the publishing giant’s ScienceDirect platform from leaving Elsevier’s walled garden and visiting sites like Wikipedia in order to look up definitions of key terms:

Elsevier is hoping to keep researchers on its platform with the launch of a free layer of content called ScienceDirect Topics, offering an initial 80,000 pages of material relating to the life sciences, biomedical sciences and neuroscience. Each offers a quick definition of a key term or topic, details of related terms and relevant excerpts from Elsevier books.

Significantly, this content is not written to order but is extracted from Elsevier’s books, in a process that Sumita Singh, managing director of Elsevier Reference Solutions, described as "completely automated, algorithmically generated and machine-learning based".
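For a concrete (if crude) sense of what “algorithmically generated” definition pages might involve, here is a toy sketch in Python. To be clear, this illustrates the general idea only, not Elsevier’s actual pipeline, which the company has not published; the pattern and sample text are invented.

```python
import re

# Toy definition extractor: finds "TERM is a/an/the ..." patterns in prose.
# Purely illustrative; a production system would use trained language models,
# not a single regular expression.
DEFINITION = re.compile(
    r"\b([A-Z][A-Za-z-]*(?: [a-z-]+)?) (?:is|are) (?:a|an|the) ([^.]+)\."
)

def extract_definitions(text):
    """Map each candidate term to the first candidate definition found."""
    found = {}
    for term, body in DEFINITION.findall(text):
        found.setdefault(term, body.strip())
    return found

sample = ("Apoptosis is a form of programmed cell death that occurs in "
          "multicellular organisms. The hippocampus is a brain structure "
          "involved in memory.")
print(extract_definitions(sample))
# {'Apoptosis': 'form of programmed cell death that occurs in multicellular
#  organisms', 'The hippocampus': 'brain structure involved in memory'}
```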

It’s typical of Elsevier’s unbridled ambition that instead of supporting a digital commons like Wikipedia, it wants to compete with it by creating its own redundant, proprietary versions of the same information. Even worse, it is drawing that information from books written by academics who have given Elsevier a license — perhaps unwittingly — that allows it to do that. The fact that a commercial outfit mines what are often publicly funded texts in this way is deeply hypocritical, since Elsevier’s own policy on text and data mining forbids other companies from doing the same. It’s another example of how Elsevier uses its near-monopolistic stranglehold over academic publishing for further competitive advantage. Maybe it’s time antitrust authorities around the world took a look at what is going on here.

via Techdirt http://ift.tt/2x30vmd

You’re Taking Breaks The Wrong Way, Here’s How To Fix That

In our always-on, 100%-hustle, productivity-at-all-costs culture, it’s hard to justify taking a few minutes to yourself during the workday, let alone a full lunch hour. A recent Apple ad even celebrated entrepreneurs working so hard that they’re not able to see their children.

But this style of working is unsustainable. We physically can’t work at 100% capacity, 100% of the time. We need breaks. But how do you take them properly? Here are seven science-backed strategies to help you maximize your downtime.

1. Take A Break Every 52 Minutes

Concentration and focus are our ultimate productivity weapons, and they need to be protected. Yet they’re constantly under attack and used up resisting the “bad” choices that surround us.

  • You resist the urge to surf the web when you’ve got work to do
  • You resist ordering a burger at lunch
  • You resist checking emails when you’re working on a project

All these moments of resistance add up. And to keep our focus and concentration strong all day long, we need to treat our willpower like the muscle it is.

For years, productivity methods like Pomodoro have suggested that working in a series of short bursts or “sprints” followed by short breaks is the best way to keep yourself on track. But just how long should these bursts be? While the Pomodoro technique advocates for shorter sprints of 25 minutes followed by a 5-minute break (with a longer break after every 4 “sessions”), research from the team at DeskTime came up with a different number.


After analyzing 5.5 million daily records of how office workers use their computers (based on what each user self-identified as “productive” work), the DeskTime team found that the top 10% most productive workers worked an average of 52 minutes before taking a 17-minute break.

Why does this work? There are a number of reasons:

  • By knowing you have a break coming up, you’re more likely to stay focused and work with purpose.
  • Working for any longer can cause cognitive boredom.
  • Your body wasn’t meant to sit for 8 hours a day.
  • You’ve probably heard that sitting is the new smoking, and while the statement makes for better headlines than fact, it’s true that getting some activity at regular intervals in your day will improve your health and mental focus.
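If you want to experiment with the 52/17 cadence, a bare-bones timer takes only a few lines. Here’s a minimal sketch in Python; the 52- and 17-minute figures are just the DeskTime averages above, and the “notification” is a plain terminal bell, so swap in whatever alert you prefer:

```python
import time

WORK_MINUTES = 52   # average focused stretch in the DeskTime data
BREAK_MINUTES = 17  # average break length in the same data

def interval(minutes, label):
    """Announce a phase, wait it out, then ring the terminal bell."""
    print(f"{label} for {minutes} minutes...")
    time.sleep(minutes * 60)
    print(f"\a{label} over.")  # "\a" is the terminal bell

while True:  # Ctrl+C to stop
    interval(WORK_MINUTES, "Work")
    interval(BREAK_MINUTES, "Break")
```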

2. Distract Yourself To Recharge Your Focus

The hardest part of taking regular breaks during your workday is switching off. But research shows that this intense focus actually makes us less focused in the long run.

In a study on attention in the journal Cognition, University of Illinois psychology professor Alejandro Lleras compared how our brains naturally stop registering sights, sounds, and feelings if they remain consistent for a period of time with how they react to thoughts that remain consistent for long periods of time. “If sustained attention to a sensation makes that sensation vanish from our awareness, sustained attention to a thought should also lead to that thought’s disappearance from our mind,” Lleras explains.

Instead of thinking about a problem nonstop, we need to create distractions that pull our attention away from the task at hand so we can come back to it with a fresh mind.

One way to do this is to overload your cognitive abilities by multitasking on your break. It might seem counterintuitive to add more cognitive strain during a break, but the key here is to force your mind off the work at hand.

3. Take In The Great Outdoors

Staying in an artificially lit, stuffy office, coworking space, or cafe all day might be a necessity for getting things done. But escaping that space for even a few minutes during the day can have huge benefits.

Studies show that just spending time in nature can help alleviate mental fatigue by relaxing and restoring the mind. Additionally, increased exposure to sunlight and fresh air helps increase productivity and can even improve your sleep. In one study, researchers found that workers with more exposure to natural light during the day slept an average of 46 minutes more per night.


And what if you can’t get outdoors during your break? If you can’t go to nature, you can always bring nature to you. Simply being around natural elements can have the same effect. Just seeing plants around you can improve morale, increase satisfaction with your work, and make you more patient.

4. Give Your Mind The Right Fuel

One of the most common reasons we take a break is that our body tells us we need one. When your stomach is grumbling louder than the thoughts in your head, you have to do something about it. Unfortunately, choosing the wrong food or beverage can deplete your mental energy rather than restore it.

When you’re hungry, a hormone called ghrelin, produced in the stomach, signals the neurotransmitter NPY in the brain that your body’s energy levels are low and you need food.

NPY lives in the hypothalamus–the section of your brain that controls fatigue, memory, and emotion–and essentially is always making sure you have enough energy to function. When you’re hungry and your energy level dips, it takes over and reminds you to eat.

Once you do eat, your food is broken down into glucose–fuel for your brain. But unlike your car, which doesn’t care as long as the tank isn’t totally empty, your brain works best with a consistent level of glucose in your blood–around 25 grams, according to University of Roehampton researcher Leigh Gibson.

Now, you can get that 25 grams from a snack bar, a banana, or carbs like bread, rice, or pasta, but a recent study in The American Journal of Clinical Nutrition found that protein not only gives you that quick hit of glucose but is the only macronutrient to enhance cognitive abilities for longer than 15-20 minutes after ingestion.

To keep your brain working at peak performance, opt for a snack on your break that includes a higher level of protein, such as a small serving of chicken, beef, or fish, nuts or nut butter, or a protein supplement. And remember to keep your portions small to reduce the risks of a post-snack crash.

5. Exercise Your Eyes

Our eyes bear the brunt of our tech-fueled lives. Most of us spend around 6-9 hours a day on digital devices, with 28% of us locked on one screen or another for 10+ hours. Your eyes can begin to feel strain in as little as two hours, which is why taking a vision break during the day is so important.

Luckily, there’s a simple exercise that will help reduce your eye fatigue: 20-20-20. Every 20 minutes, look away from your computer screen and focus on an object at least 20 feet away for at least 20 seconds. Easy, right?
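Easy enough to automate, too. Here’s a minimal reminder in the same spirit as the break timer sketched earlier; again, the terminal bell is just a stand-in for whatever notification you actually use:

```python
import time

# 20-20-20 reminder: after every 20 minutes of screen time,
# prompt a 20-second look at something 20+ feet away.
while True:  # Ctrl+C to stop
    time.sleep(20 * 60)
    print("\aLook at something 20+ feet away for 20 seconds.")
    time.sleep(20)
    print("Back to work.")
```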

Beyond just taking care of your eyes during your breaks, there are a few other simple steps you can take to protect your vision all day long:

  • Dim your lights: Your computer screen should be the brightest thing in the room.
  • Reduce glare: When one spot on your screen is brighter than the rest, your eyes have a hard time adjusting to it, which can cause added strain. Try an anti-glare screen cover, clean your screen regularly, and make sure you’re not too close to a window.
  • Make your workspace more eye-friendly: Most of us have our workspaces set up all wrong for our eyes. We stare down at laptop screens or crane our necks up at monitors. Proper ergonomics help reduce fatigue in your entire body, but especially in your eyes.

6. Hit The Gym (Or At Least Go For a Walk)

Exercise is one of the easiest ways to reduce fatigue, boost energy, and increase your productivity throughout the day. Researchers from the University of São Paulo discovered that just 10 minutes of exercise is enough to boost memory and attention performance throughout the day.

If you don’t want to change into workout clothes or risk spending the rest of the day with sweat stains, even a simple walk has been shown to refresh memories and increase creativity. In a report from the American Psychological Association, researchers found that walking boosted creativity for 81% of participants.


For an added bonus, hit the block without your phone. A study in Social Psychology found that simply having your phone nearby can increase anxiety and lower your overall cognitive performance by as much as 20%.

7. Simply Sit And Let Your Mind Wander

So far we’ve looked at a number of things you can do on your break to replenish your energy. But what about just doing nothing?

A report published in Science found that simply letting our minds wander by zoning out or daydreaming has benefits similar to meditation. When we stop paying attention to anything, our brain’s Default Mode Network takes over, which gives our overworked prefrontal cortex–where complex processes like problem-solving, memory, reason, and logic take place–a well-deserved rest.

Not only that, but taking some time to let your mind drift can help you come up with more novel ideas and uncover hidden answers when you’re back at work.

NYU psychology professor Scott Barry Kaufman found that daydreaming is a fantastic way for us to access our unconscious and allow ideas that have been silently incubating to bubble up into our conscious mind. That means that while you think you’re doing nothing, you’re actually mining the depths of your mind for more creative solutions to the problems you’re facing. It’s a win-win.

In our culture of doing, taking regular breaks can be seen as lazy or unproductive. But when done correctly, breaks are actually the ultimate productivity hack, because they let us do more in less time. So stop glorifying long days and burnout-inducing hours and take a break. You deserve it.


A version of this article originally appeared on Zapier and was adapted with permission.

via Fast Company http://ift.tt/2yf1FLx

Thinking about the social cost of technology

Every time I call my mum for a chat there’s usually a point on the phone call where she’ll hesitate and then, apologizing in advance, bring up her latest technological conundrum.

An email she’s received from her email provider warning that she needs to upgrade the operating system of her device or lose access to the app. Or messages she’s sent via such and such a messaging service that were never received or only arrived days later. Or she’ll ask again how to find a particular photo she was previously sent by email, how to save it and how to download it so she can take it to a shop for printing.

Why is it that her printer suddenly now prints text unreadably small, she once asked me. And why had the word processing package locked itself on double spacing? And could I tell her why the cursor kept jumping around when she typed, because she kept losing her place in the document?

Another time she wanted to know why video calling no longer worked after an operating system upgrade. Ever since then, her concern has always been whether she should upgrade to the latest OS at all — if that means other applications might stop working.

Yet another time she wanted to know why the video app she always used was suddenly asking her to sign into an account she didn’t think she had just to view the same content. She hadn’t had to do that before.

Other problems she’s run into aren’t even offered as questions. She’ll just say she’s forgotten the password to such and such an account and so it’s hopeless because it’s impossible to access it.

Most of the time it’s hard to remote-fix these issues because the specific wrinkle or niggle isn’t the real problem anyway. The overarching issue is the growing complexity of technology itself, and the demands this puts on people to understand an ever widening taxonomy of interconnected component parts and processes. To mesh willingly with the system and to absorb its unlovely lexicon.

And then, when things invariably go wrong, to deconstruct its unpleasant, inscrutable missives and make like an engineer and try to fix the stuff yourself.

Technologists apparently feel justified in setting up a deepening fog of user confusion as they shift the upgrade levers up another gear to reconfigure the ‘next reality’, while their CEOs eye the prize of sucking up more consumer dollars.

Meanwhile, ‘users’ like my mum are left with another cryptic puzzle of unfamiliar pieces to try to slot back together and — they hope — return the tool to the state of utility it was in before everything changed on them again.

These people will increasingly feel left behind and unplugged from a society where technology is playing an ever greater day-to-day role, and also playing an ever greater, yet largely unseen, role in shaping day-to-day society by controlling so much of what we see and do. AI is the silent decision maker that really scales.

The frustration and stress caused by complex technologies that can seem unknowable — not to mention the time and mindshare that gets wasted trying to make systems work as people want them to work — doesn’t tend to get talked about in the slick presentations of tech firms with their laser pointers fixed on the future and their intent locked on winning the game of the next big thing.

All too often the fact that human lives are increasingly enmeshed with and dependent on ever more complex, and ever more inscrutable, technologies is considered a good thing. Negatives don’t generally get dwelled on. And for the most part people are expected to move along, or be moved along by the tech.

That’s the price of progress, goes the short sharp shrug. Users are expected to use the tool — and take responsibility for not being confused by the tool.

But what if the user can’t properly use the system because they don’t know how to? Are they at fault? Or is it the designers failing to properly articulate what they’ve built and pushed out at such scale? And failing to layer complexity in a way that does not alienate and exclude?

And what happens when the tool becomes so all consuming of people’s attention and so capable of pushing individual buttons it becomes a mainstream source of public opinion? And does so without showing its workings. Without making it clear it’s actually presenting a filtered, algorithmically controlled view.

There’s no newspaper-style masthead or TV news captions to signify the existence of Facebook’s algorithmic editors. But increasingly people are tuning in to social media to consume news.

This signifies a major, major shift.

*

At the same time, it’s becoming increasingly clear that we live in conflicted times as far as faith in modern consumer technology tools is concerned. Almost suddenly, technology’s algorithmic instruments are being fingered as the source of big problems, not just at-scale solutions. (And sometimes even as both problem and solution; confusion, it seems, can also beget conflict.)

Witness the excruciating expression on Facebook CEO Mark Zuckerberg’s face, for example, when he livestreamed a not-really mea culpa last week on how the company has treated political advertising on its platform.

This after it was revealed that Facebook’s algorithms had created categories for ads to be targeted at people who had indicated approval for burning Jews.

And after the US election agency had started talking about changing the rules for political ads displayed on digital platforms — to bring disclosure requirements in line with regulations on TV and print media.

It was also after an internal investigation by Facebook into political ad spending on its platform turned up more than $100,000 spent by Russian agents seeking to sow social division in the U.S.

Zuckerberg’s difficult decision (writ large on his tired visage) was that the company would be handing over to Congress the 3,000 Russian-bought ads it said it had identified as possibly playing a role in shaping public opinion during the U.S. presidential election.

But it would be resisting calls to make the socially divisive, algorithmically delivered ads public.

So enhancing the public’s understanding of what Facebook’s massive ad platform is actually serving up for targeted consumption, and the kinds of messages it is really being used to distribute, did not make it onto Zuck’s politically prioritized to-do list. Even now.

Presumably that’s because he’s seen the content and it isn’t exactly pretty.

Ditto the ‘fake news’ being freely distributed on Facebook’s content platform for years and years. And only now becoming a major political and PR problem for Facebook — which it says it’s trying to fix with yet more tech tools.

And while you might think a growing majority of people don’t have difficulty understanding consumer technologies, and therefore that tech users like my mum are a dwindling minority, it’s rather harder to argue that everyone fully understands what’s going on with what are now highly sophisticated, hugely powerful tech giants operating behind shiny facades.

It’s really not as easy as it should be to know how, and for what, these mega tech platforms can be used. Not when you consider how much power they wield.

In Facebook’s case we can know, abstractly, that Zuck’s AI-powered army is ceaselessly feeding big data on billions of humans into machine learning models to turn a commercial profit by predicting what any individual might want to buy at a given moment.

Including, if you’ve been paying above-average attention, by tracking people’s emotions. It has also been shown to have experimented with trying to control people’s feelings. Though the Facebook CEO prefers to talk about Facebook’s ‘mission’ being to “build a global community” and “connect the world”, rather than it being a tool for tracking and serving opinion en masse.

Yet we, the experimented-on Facebook users, are not party to the full engineering detail of how the platform’s data-harvesting, information-triangulating, person-targeting infrastructure works.

It’s usually only through external investigation that negative impacts are revealed. Such as ProPublica reporting in 2016 that Facebook’s tools could be used to include or exclude users from a given ad campaign based on their “ethnic affinity” — potentially allowing ad campaigns to breach federal laws in areas such as housing and employment, which prohibit discriminatory advertising.

That external exposé led Facebook to switch off “ethnic affinity” ad targeting for certain types of ads. It had apparently failed to identify this problem with its ad-targeting infrastructure itself. Apparently it’s outsourcing responsibility for policing its business decisions to investigative journalists.

The problem is the power to understand the full implications and impact of consumer technologies that are now being applied at such vast scale — across societies, civic institutions and billions of consumers — is largely withheld from the public, behind commercially tinted glass.

So it’s unsurprising that the ramifications of tech platforms enabling free access to, in Facebook’s case, peer-to-peer publishing and the targeting of entirely unverified information at any group of people and across global borders is only really starting to be unpicked in public.

Any technology tool can be a double-edged sword. But if you don’t fully understand the inner workings of the device it’s a lot harder to get a handle on possible negative consequences.

Insiders obviously can’t claim such ignorance. Even if Sheryl Sandberg’s defense of Facebook having built a tool that could be used to advertise to antisemites was that they just didn’t think of it. Sorry, but that’s just not good enough.

Your tool, your rules, your responsibility to think about and close off negative consequences. Especially when your stated ambition is to blanket-roll your platform across the entire world.

Prior to Facebook finally ‘fessing up about Russia’s divisive ad buys, Sandberg and Zuckerberg also sought to play down Facebook’s power to influence political opinion — while simultaneously operating a hugely lucrative business that near-exclusively derives its revenue from telling advertisers it can influence opinion.

Only now, after a wave of public criticism in the wake of the U.S. election, Zuck tells us he regrets saying people were crazy to think his two-billion+ user platform tool could be misused.

If he wasn’t being entirely disingenuous when he said that, he really was being unforgivably stupid.

*

Other algorithmic consequences are of course available in a world where a handful of dominant tech platforms now have massive power to shape information and therefore society and public opinion. In the West, Facebook and Google are chief among them. In the U.S. Amazon also dominates in the ecommerce realm, while also increasingly pushing beyond this — especially moving in on the smart home and seeking to put its Alexa voice-AI always within earshot.

But in the meantime, while most people continue to think of using Google when they want to find something out, a change to the company’s search ranking algorithm has the ability to lift information into mass view or bury data below the fold where the majority of seekers will never find it.

This has long been known, of course. But for years Google has presented its algorithms as akin to an impartial index, when really they are in indentured service to the commercial interests of its business.

We don’t get to see the algorithmic rules Google uses to order the information we find. But based on the results of those searches the company has sometimes been accused of, for example, using its dominant position in Internet search to place its own services ahead of competitors. (That’s the charge of competition regulators in Europe, for example.)

This April, Google also announced it was making changes to its search algorithm to try to reduce the politically charged problem of ‘fake news’ — apparently also being surfaced in Internet searches. (Or “blatantly misleading, low quality, offensive or downright false information”, as Google defined it.)

Offensive content has also recently threatened Alphabet’s bottom line, after advertisers pulled content from YouTube when it was shown being served next to terrorist propaganda and/or offensive hate speech. So there’s a clear commercial motivator driving Google search algorithm tweaks, alongside rising political pressure for powerful tech platforms to clean up their act.

Google now says it’s hard at work building tools to try to automatically identify extremist content. Its catalyst for action appears to have been a threat to its own revenues — much like Facebook having a change of heart when suddenly faced with lots of angry users.

Thing is, when it comes to Google demoting fake news in search results, on the one hand you might say ‘great! it’s finally taking responsibility for aiding and incentivizing the spread of misinformation online’. On the other hand you might cry foul, as self-billed “independent media” website AlterNet did this week — claiming that whatever change Google made to its algorithm has cut traffic to its site by 40 per cent since June.

I’m not going to wade into a debate about whether AlterNet publishes fake news or not. But it certainly looks like Google is doing just that.

When asked about AlterNet’s accusations that a change to its algorithm had nearly halved the site’s traffic, a Google spokesperson told us: “We are deeply committed to delivering useful and relevant search results to our users. To do this, we are constantly improving our algorithms to make our web results more authoritative. A site’s ranking on Google Search is determined using hundreds of factors to calculate a page’s relevance to a given query, including things like PageRank, the specific words that appear on websites, the freshness of content, and your region.”

So basically it’s judging AlterNet’s content as fake news, while AlterNet hits back with the claim that a “new media monopoly is hurting progressive and independent news”.
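To get a feel for why a single ranking tweak can nearly halve a site’s traffic, it helps to picture relevance as a weighted score over many signals, as that list of factors hints. The sketch below is a toy illustration only, with invented signal names, weights, and sites; Google’s actual system uses hundreds of factors and is not public:

```python
# Toy relevance scoring: a weighted sum over a few invented signals.
# Nothing here reflects Google's real (and unpublished) algorithm.
WEIGHTS = {"pagerank": 0.4, "query_match": 0.3,
           "freshness": 0.2, "authority": 0.1}

def score(signals):
    """Combine a page's signal values (each 0 to 1) into one score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

pages = {
    "indie-news.example/story": {"pagerank": 0.7, "query_match": 0.8,
                                 "freshness": 0.9, "authority": 0.3},
    "legacy-paper.example/story": {"pagerank": 0.6, "query_match": 0.7,
                                   "freshness": 0.5, "authority": 0.9},
}

# Rank the pages. Move weight from "freshness" to "authority" and the
# order flips; with it goes the traffic.
for url in sorted(pages, key=lambda u: score(pages[u]), reverse=True):
    print(f"{score(pages[url]):.2f}  {url}")
```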

What’s clear is Google has put its algorithms in charge of assessing something as subjective as ‘information quality’ and authority — with all the associated editorial risks such complex decisions entail.

But instead of humans making case-by-case decisions, as would be the case with a traditional media operation, Google is relying on algorithms to automate and therefore eschew specific judgment calls.

The result is a tech tool that surfaces or demotes pieces of content at vast scale without accepting responsibility for these editorial judgment calls.

After hitting ‘execute’ on the new code, Google’s engineers leave the room — leaving us human users to sift through the data it pushes at us to try to decide whether what we’re being shown looks fair or accurate or reasonable or not.

Once again we are left with the responsibility of dealing with the fallout from decisions automated at scale.

But expecting people to evaluate the inner workings of complex algorithms without letting them see inside those black boxes — while also subjecting them to the decisions and outcomes of those same algorithms — doesn’t seem a very sustainable situation.

Not when the tech platforms have got so big they’re at risk of monopolizing mainstream attention.

Something has to give. Just taking it on faith that algorithms applied at massive scale will have a benign impact, or that the rules underpinning vast information hierarchies should never be interrogated, is about as sane as expecting every person, young or old, to understand exactly how your app works in perfect detail, to weigh up whether they really need your latest update, and to troubleshoot all the problems when your tool fails to play nice with the rest of their tech.

We are just starting to realize the extent of what can get broken when the creators of tech tools evade wider social responsibilities in favor of driving purely for commercial gain.

More isn’t better for everyone. It may be better for an individual business but at what wider societal cost?

So perhaps we should have paid more attention to the people who have always said they don’t understand what this new tech thing is for, or questioned why they really need it, and whether they should be agreeing to what it’s telling them to do.

Maybe we should all have been asking a lot more questions about what the technology is for.

via TechCrunch http://ift.tt/2fZqWPo