At its simplest, impression share = impressions ÷ total eligible impressions. It depends on a number of factors, which we’ll dive into shortly, and it’s available at the campaign, ad group and keyword levels. A 50% impression share, for example, means that of all the auctions your ad was eligible to appear in, it actually showed in half, so you lost 50% of the traffic. Generally this is a budget issue, but there are other factors at play.
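The formula is simple enough to compute directly. A quick sketch, with purely illustrative figures:

```python
def impression_share(impressions, eligible_impressions):
    """Share of eligible auctions in which the ad actually appeared."""
    return impressions / eligible_impressions

# Illustrative figures: the ad showed in 5,000 of 10,000 eligible auctions.
share = impression_share(5_000, 10_000)
print(f"{share:.0%}")  # prints "50%" - half of the eligible traffic was lost
```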
What is a good impression share?
You might think a good impression share is 95%-100%. That’s true in most cases, unless you’re in a high-competition niche or industry: if competition is high and you hold 100% impression share, your budget will be exhausted very quickly.
If your impression share is 60%, your ad will only be seen 60% of the time and will, in turn, get fewer clicks. But if the competition is high, you’ll still have enough leads or sales.
So what exactly affects Impression share?
The quality score of your ads helps determine ad position and impression share.
The more ‘matched’ the ad copy is to the keyword, the higher the position and impressions.
Keyword match types.
Broad match gets the most impressions but can waste money on irrelevant searches.
Phrase match gets more impressions than exact match, but wastes less money than broad.
Exact match gets fewer impressions but only shows for searches matching that exact phrase.
Negative keywords increase impression share for the searches you actually want to show for: your ads are displayed for fewer search queries, so you gain a higher share of impressions from qualified users.
Budget & Keyword Bid.
The more money that’s available to spend on keywords, the more the keyword will show.
Targeting fewer locations than a competitor strengthens your impressions in the areas you do target. For example, if a competitor targets all of Ireland, Dublin will take a lot of their budget, meaning less budget (and fewer impressions) for the rest of the country.
The more your ad is displayed during the day, the more impressions you get.
Increases or decreases in budget during the day will increase or decrease impressions.
Target CPA bidding needs at least 30 conversions before it will start to learn how to use your budget effectively.
Target Impression Share bidding lets you set a goal impression share percentage for one of three placements:
Absolute top of page.
Top of page.
Anywhere on the page.
You can also set a maximum CPC bid to help avoid overspending.
Google also has some useful documentation on impression share if you’d like to read further.
I hope that’s covered what affects impression share. If you have any questions or insights of your own, don’t forget to leave a comment below!
A substantial part of humanity is slowly emerging from weeks of lockdown. What we have experienced is truly rare: a real global threat, menacing to all wherever we lived. But how did humanity respond to this pandemic? Did people consistently stay at home as most governments asked them to? And if they didn’t, where did they go?
We can answer these questions thanks to Google. It has released data on people’s movements gathered from millions of mobile devices that use its software (Android, Google Maps and so on). Never before has this level of detail been available. For infamous pandemics in history even basic facts are disputed (for example the number of deaths from the Black Death). The Google dataset seems to be of such quality that several scientific questions can finally be resolved.
Across Europe, the picture the data paints is varied. Some of the difference can be attributed to the lockdown strategies of different countries. But some, seemingly, cannot. This may be useful when considering future lockdowns.
How the data reveals behaviour
Google first divided where people spent their time into six location categories: homes; workplaces; parks; public transport stations; grocery shops and pharmacies; and retail and recreational locations.
It then released aggregated data on time spent at each of the six location types for the past several months, compared to a baseline: the five-week period between January 3 and February 6 2020. To the extent that no special events happened during this time, the change from the baseline after this reflects people’s collective response to the pandemic and the lockdowns.
Using the Google data, we then created the following graphs, comparing the UK, France, Spain, Italy, Germany, Denmark, Sweden and Greece between mid-February and early May. To get a smoother image, we calculated a seven-day moving average. Countries are also ranked and coloured in the graph legends according to their average reaction over the whole period (meaning a country’s colour can differ between graphs).
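The smoothing step described above is straightforward to reproduce. A minimal sketch of a seven-day moving average; the daily figures below are made up for illustration, not taken from the Google dataset:

```python
def moving_average(values, window=7):
    """Mean of each trailing `window`-day slice, used to smooth a daily series."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Hypothetical daily % change from baseline for one country and category.
daily_change = [-2, -1, -3, -5, -20, -28, -31, -33, -35, -34, -36, -35]
smoothed = moving_average(daily_change)
print([round(v, 1) for v in smoothed])  # [-12.9, -17.3, -22.1, -26.6, -31.0, -33.1]
```

Each smoothed point averages the preceding week, which irons out day-of-week effects such as weekend dips.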
What were the differences between countries?
Let’s start with people staying at home.
For a good part of April, all these countries except for Sweden were officially in some form of lockdown, with measures in place banning non-essential movement. However, behaviour varied substantially.
In Spain, Italy and France, time spent at home rose early in the pandemic by 30-35%. Even the most outdoorsy people must have stayed home for at least 10-12 hours before the lockdown, so this means at least three-to-five extra hours were spent at home per person – for most even more. This reflects these countries’ strict lockdowns: they banned all events, limited outdoor exercise, and in France’s case, required documentation to go outside.
Germany and Denmark were more relaxed; the rise in staying home was about 15%, reflecting their partial lockdowns. Sweden’s increase was even lower at 8-10%.
The UK was somewhere in between, reacting late but then strongly, with a rise of about 20-25%. The delay reflects its lockdown beginning later – on March 23 – though it is interesting that some people were already staying home before its lockdown began.
Greece is an interesting case, as it reacted relatively early and strongly, but started relaxing in late March, with a strong effect by mid-April, long before its non-essential movement measures were lifted on May 3. This might indicate that compliance is a matter of perceived risk. Greece kept its COVID-19 cases and deaths remarkably low, which may have caused people to relax.
The mix of how people spent their outdoor time also differed. For example, the next graph presents the park visit data.
For most of April, Sweden, Denmark and Germany saw a rise in the time people spent in parks (including national and local parks, public gardens and beaches). At the same time, Italy and Spain saw 80% drops. Greece and the UK are again somewhere in the middle, seeing a drop initially but coming back to the benchmark in early May. In Greece’s case we actually see a rise of almost 50% lately compared to the benchmark – again suggesting that fatigue may have set in, in combination with good weather and a lack of perceived risk.
Germany’s park visit data is further evidence that lockdown measures do not fully determine behaviour, and that people have their own motives. Its graph line is somewhat similar to Denmark’s and Sweden’s, countries with less strict official policies; the country with the most similar policy on going outdoors was the UK, whose line shows a decrease instead of an increase.
Lastly, let us look at time spent at workplaces. Again, in some countries people were going to work almost as much as before, while in others there were drops of 70-80%. Spain and Italy banned all non-essential work – a measure that went beyond the restrictions of all other countries – so it is not a surprise to see these at the bottom of the graph. We can see, though, the effect of Spain allowing some sectors to open up again on April 14.
What should we do with this data?
Behavioural fatigue, a much-maligned term during the UK government’s handling of the crisis, is now an issue that can be discussed properly. While lockdown measures were still in place, people around Europe started leaving their homes more. It’s clear that adherence to lockdown decreased over time.
Governments now need to investigate whether this affected the spread of disease. Is staying at home a solution? And if people do not stay at home, does it matter where they go? Answering these questions might allow governments to design an optimal lockdown policy mix that, say, allows people to go to parks, but not mingle in shops and railway stations.
As the threat of the virus is not eliminated, and second waves are expected around the world, gaining these answers will be very important.
When I see red, it’s the most religious experience. Seeing red just results from photons of a certain frequency hitting the retina of my eye, which cascades electrical and biochemical pulses through my brain, in the same way a PC runs. But nothing happening in my eye or brain actually is the red colour I experience, nor are the photons or pulses. This is seemingly outside this world. Some say my brain is just fooling me, but I don’t accept that as I actually experience the red. But then, how can something out of this world be in our world? Andrew Kaye, 52, London.
What’s going on in your head right now? Presumably you’re having a visual experience of these words in front of you. Maybe you can hear the sound of traffic in the distance or a baby crying in the flat next door. Perhaps you’re feeling a bit tired and distracted, struggling to focus on the words on the page. Or maybe you’re feeling elated at the prospect of an enlightening read. Take a moment to attend to what it’s like to be you right now. This is what’s going on inside your head.
Or is it? There’s another, quite different story. According to neuroscience, the contents of your head comprise 86 billion neurons, each one linked to 10,000 others, yielding trillions of connections.
A neuron communicates with its neighbour by converting an electrical signal into a chemical signal (a neurotransmitter), which then passes across the gap in between the neurons (a synapse) to bind to a receptor in the neighbouring neuron, before being converted back into an electrical signal. From these basic building blocks, huge networks of electro-chemical communication are built up.
This article is part of Life’s Big Questions, The Conversation’s new series co-published with BBC Future, which seeks to answer our readers’ nagging questions about life, love, death and the universe. We work with professional researchers who have dedicated their lives to uncovering new perspectives on the questions that shape our lives.
These two stories of what’s going on inside your head seem very different. How can they both be true at the same time? How do we reconcile what we know about ourselves from the inside with what science tells us about our body and brain from the outside? This is what philosophers have traditionally called the mind-body problem. And there are solutions to it that don’t require you to accept that there are separate worlds.
Ghost in the machine?
Probably the most popular solution to the mind-body problem historically is dualism: the belief that the human mind is non-physical, outside of the physical workings of the body and the brain. According to this view, your feelings and experiences aren’t strictly speaking in your head at all – rather they exist inside an immaterial soul, distinct from, although closely connected to, your brain.
The relationship between you and your body, according to dualism, is a little bit like the relationship between a drone pilot and his drone. You control your body, and receive information from its sensors, but you and your body are not the same thing.
Dualism allows for the possibility of life after death: we know the body and the brain decay, but perhaps the soul lives on when the body dies, just as a drone pilot lives on if his drone is shot down. It is also perhaps the most natural way for human beings to think about the body-mind relationship. The psychologist Paul Bloom has argued that dualism is hardwired into us, and that from a very early age infants start to distinguish “mental things” from “physical things”. Reflecting this, most cultures and religions throughout history seem to have adopted some kind of dualism.
The trouble is that dualism does not fit well with the findings of modern science. Although dualists think the mind and the brain are distinct, they believe there is an intimate causal relationship between the two. If the soul makes a decision to raise an arm, this somehow manages to influence the brain and thereby set off a causal chain which will result in the arm going up.
Rene Descartes, the most famous dualist in history, hypothesised that the soul communicated with the brain through the pineal gland, a small, pea-shaped gland located near the centre of the brain. But modern neuroscience has cast doubt on the idea that there is a single, special location in the brain where the mind interacts with the brain.
Perhaps a dualist could maintain that the soul operates at several places in the brain. Still, you’d think we’d be able to observe these incoming signals arriving in the brain from the immaterial soul, just as we can observe in a drone where the radio signals sent by the pilot arrive. Unfortunately, this is not what we find. Rather, scientific investigation seems to show that everything that happens in a brain has a physical cause within the brain itself.
Imagine we found what we thought was a drone, but upon subsequent examination we discovered that everything the drone did was caused by processes within it. We would conclude that this was not being controlled by some external “puppeteer” but by the physical processes within it. In other words, we would have discovered not a drone but a robot. Many philosophers and scientists are inclined to draw the same conclusions about the human brain.
Am I my brain?
Among contemporary scientists and philosophers, the most popular solution to the mind-body problem is probably materialism. Materialists aspire to explain feelings and experiences in terms of the chemistry of the brain. It is broadly agreed that nobody has the slightest clue as yet how to do it, but many are confident that we one day will.
This confidence probably arises from the sense that materialism is the scientifically kosher option. The success of science in the past 500 years is after all mind-blowing. This gives people confidence that we just need to plug away with our standard methods of investigating the brain, and one day we’ll solve the riddle.
But there is reason to doubt this confidence. Galileo was the first person to demand that science should be mathematical, yet he understood quite well that human experience cannot be captured in these terms. That’s because human experience involves qualities – the redness of a red experience, the euphoria of love – and these kinds of qualities cannot be captured in the purely quantitative language of mathematics.
Galileo got around this problem by adopting a form of dualism, according to which the qualities of consciousness existed only in the incorporeal “animation” of the body, rather than in the basic matter that is the proper focus of physical science. Only once Galileo had located consciousness outside of the realm of science, was mathematical science possible.
In other words, our current scientific approach is premised on Galileo’s separation of the quantitative physical world from the qualitative reality of consciousness. If we now want to bring consciousness into our scientific story, we need to bring these two domains back together.
Is consciousness fundamental?
Materialists try to reduce consciousness to matter. We have explored some problems with that approach. What about doing it the other way around – can matter be reduced to consciousness? This brings us to the third option: idealism. Idealists believe that consciousness is all that exists at the fundamental level of reality. Historically, many forms of idealism held that the physical world is some kind of illusion, or a construction generated from our own minds.
Idealism is not without its problems either. Materialists put matter at the basis of everything, and then have a challenge understanding where consciousness comes from. Idealists put consciousness at the basis of everything, but then have a challenge explaining where matter comes from.
But a new – or rather rediscovered – way of building matter from consciousness has recently been garnering a great deal of attention among scientists and philosophers. The approach starts from the observation that physical science is confined to telling us about the behaviour of matter and what it does. Physics, for example, is basically just a mathematical tool for telling us how particles and fields interact. It tells us what matter does, not what it is.
If physics doesn’t tell us what fields and particles are, then this opens up the possibility that they might be forms of consciousness. This approach, known as panpsychism, allows us to hold that both physical matter and consciousness are fundamental. This is because, according to panpsychism, particles and fields simply are forms of consciousness.
At the level of basic physics, we find very simple forms of consciousness. Perhaps quarks, fundamental particles that help make up the atomic nucleus, have some degree of consciousness. These very simple forms of consciousness could then combine to form very complex forms of consciousness, including the consciousness enjoyed by humans and other animals.
So, according to panpsychism, your experience of red and the corresponding brain process don’t take place in separate worlds. Whereas Galileo separated out the qualitative reality of a red experience from the quantitative brain process, panpsychism offers us a way of bringing them together in a single, unified worldview. There is only one world, and it’s made of consciousness. Matter is what consciousness does.
Panpsychism is quite a radical rethink of our picture of the universe. But it does seem to achieve what other solutions cannot. It offers us a way to combine what we know about ourselves from the inside and what science tells us about our bodies and the brains from the outside, a way of understanding matter and consciousness as two sides of the same coin.
Can panpsychism be tested? In a sense it can, because all of the other options fail to account for important data. Dualism fails to account for the data of neuroscience. And materialism fails to account for the reality of consciousness itself. As Sherlock Holmes famously said: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” Given the deep problems that plague both dualism and materialism, panpsychism looks to me to be the best solution to the mind-body problem.
Even if we can solve the mind-body problem, this can never dispel the wonder of human consciousness. On such matters, the philosopher is no match for the poet.
We don’t know how long it will take to find a vaccine for COVID-19, but we do know this: if and when we find one, there will be unprecedented demand for the molecules that go into it.
Several different types of vaccine are currently being researched. These include those that use inactivated forms of the virus itself and molecules that look like the virus. The body recognises these molecules when they are injected and produces proteins called antibodies that protect us from threats like viruses. It may also be possible to treat COVID-19 patients with antibodies directly.
All of these approaches will require us to mass-produce active molecules, and quickly. But how do we do that? The question predates our current pandemic.
Last year, the search for an answer took us to the tobacco fields of Spain and Italy because, as strange as it sounds, the tobacco plant might provide a novel way to meet this huge demand.
Big farmer meets big pharma
Today, the basic components of vaccines are produced using mammalian, bacterial and yeast cell cultures in containers called bioreactors. These components are produced in controlled environments to strict specifications.
For a number of years, however, researchers have demonstrated that plants can act as bioreactors just like cell cultures. Plants have been a rich source of pharmacologically important compounds throughout history, but it has only recently become possible – thanks to biotechnology – to modify plants to grow important compounds in a targeted way. This is known as “pharming”.
Not only might this be a cheaper way to produce in-demand molecules; it is also potentially far more scalable.
If plants can be harnessed for this purpose, it could lead to new industries and alternatives for pharmaceutical companies. Lower and middle income countries could particularly benefit from this low-tech option, because cell culture alternatives require greater upfront investment. To this end, dedicated pharming facilities have recently opened in Brazil and South Africa.
Pharming for molecules is not restricted to medical applications, either. It’s also possible to grow nutritional, cosmetic and industrial molecules in plants.
The lab mouse of the plant world
It may seem counter-intuitive that the answer to a global pandemic could be produced in the leaves of one of the world’s most deadly plants. But there are good reasons why the tobacco plant, Nicotiana tabacum, and its relative N. benthamiana are common plants for pharming.
Both are easily modified and together they have become known as the lab mice of the plant science world, in part due to tobacco’s economic importance.
Tobacco has all the properties we need when selecting a pharming platform: it is quick-growing, leafy and there are people familiar with growing it all over the world. Several laboratories have already seen success in using it to grow antibodies for the treatment of HIV and the Ebola virus.
So it’s perhaps no surprise that British American Tobacco recently announced its ambition to produce between one and three million doses of a potential coronavirus vaccine using tobacco.
These efforts rely on contained, indoor production. But to produce at scale, we would need to pharm outdoors. That’s why we visited Spain and Italy – two of Europe’s largest producers of tobacco – last year, in order to speak with farmers and their cooperatives to see if they would be interested in becoming pharmers. The response, which will be published in a forthcoming research paper, was largely positive. Tobacco farmers saw this as an opportunity to increase profit in a shrinking European market and de-stigmatise a crop they want to keep growing.
Don’t bet the pharm yet
Pharming is not without its problems, some of which go beyond the technical.
It has been a long road since the first plant was used as a vehicle for pharming, partly because of the need to demonstrate that plant-derived molecules are as safe and reliable as those that come from cell cultures, which we understand far better and which are already the preferred platform for pharmaceutical companies.
But it is also because pharming requires genetic modification, a famously controversial issue with the public. (Concern over genetic modification does not appear to extend to cell culture technologies, which also often rely on modified microorganisms.)
European legislation is a huge barrier. It currently confines pharming to heavily controlled spaces such as laboratories, limiting one of pharming’s greatest assets: the fact that it could be done at large scale in open fields.
The strict rules around pharmaceutical production also pose a big challenge for outdoor pharming, despite the fact that at least one US-based company has demonstrated that it is possible to produce therapeutic molecules in the field.
Combining biotechnology with a crop surrounded by considerable controversy for understandable reasons could prove equally challenging, especially if Big Tobacco companies are involved.
But the potential is there for us to produce vaccines and therapeutics safely and at scale, using the tobacco plant for good instead of harming people’s health. And as COVID-19 sweeps the globe, there’s never been more of a need to do so.
The astronauts will take off lying on their backs in the seats, and facing in the direction of travel to reduce the stress of high acceleration on their bodies. Once launched from Kennedy Space Centre, the spacecraft will travel out over the Atlantic, turning to travel in a direction that matches the ISS orbit.
With the first rocket stage separating just over two minutes after launch, the main Dragon capsule is then expected to separate from the second stage roughly an hour later and continue on its journey. All being well, the Dragon spacecraft will rendezvous with the ISS at 15:40 (GMT) on May 28.
Launches and landings are the most critical parts of any space mission. However, SpaceX has conducted many tests, including 27 drops of the parachute landing system. It has also demonstrated an emergency separation of the Dragon capsule from the rocket: in the event of a failed launch, eight engines would lift the capsule containing the astronauts up into the air and away from the rocket, with parachutes eventually helping it to land. The Falcon 9 rocket has made 83 successful launches.
Docking and return
The space station has an orbital velocity of 7.7km per second. The Earth’s rotation carries launch sites beneath the ISS’s flight path, with each pass providing a “launch window”.
To intercept the ISS, the capsule must match the station’s speed, altitude and inclination, and it must do it at the correct time such that the two spacecraft find themselves in close proximity to each other. The difference in velocity between the ISS and the Dragon capsule must then be near to zero at the point where the orbits of the two spacecraft intersect.
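The 7.7km-per-second figure can be sanity-checked with the standard circular-orbit formula v = √(GM/r). The ISS altitude of roughly 420km used below is my assumption, not a number from the article:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter GM, in m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, in m
ALTITUDE = 420e3           # assumed ISS altitude, in m

r = R_EARTH + ALTITUDE
v = math.sqrt(MU_EARTH / r)    # speed required for a circular orbit at radius r
print(f"{v / 1000:.1f} km/s")  # prints "7.7 km/s"
```

Matching that speed at the right altitude and inclination is exactly the intercept condition described above.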
Once these conditions are met, the Dragon capsule must manoeuvre to the ISS docking port, using a series of small control thrusters arranged around the spacecraft. This is due to be done automatically by a computer, however the astronauts can control this manoeuvre manually if needed.
As you can see in the figure below, manoeuvring involves “translation control” as indicated by green arrows – moving left/right, up/down, forward/back. The yellow arrows show “attitude control” – rolling clockwise/anti-clockwise, pitching up/down, and yawing left/right.
This is complicated by Newton’s first law of motion – that any object at rest or in motion will continue to be so unless acted upon by an external force. That means any manoeuvre, such as a roll to the right, will continue indefinitely in the absence of air resistance to provide an external force until it is counteracted by firing thrusters in the opposite direction.
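A toy simulation makes the point concrete. In this sketch (illustrative numbers only), a single thruster impulse sets the capsule rolling, and the roll persists step after step until an equal and opposite impulse cancels it:

```python
def simulate_roll(impulses, dt=1.0, steps=10):
    """Integrate roll angle over time. `impulses` maps time step -> change in
    roll rate (deg/s) from a thruster firing. With no air resistance, the
    roll rate persists unchanged between firings (Newton's first law)."""
    rate, angle, history = 0.0, 0.0, []
    for t in range(steps):
        rate += impulses.get(t, 0.0)  # a firing changes the rotation rate
        angle += rate * dt            # the rate then persists on its own
        history.append(angle)
    return history

# Fire +2 deg/s at t=0; the capsule keeps rolling until the opposite
# thruster fires -2 deg/s at t=5, after which the angle stops changing.
print(simulate_roll({0: 2.0, 5: -2.0}))
# [2.0, 4.0, 6.0, 8.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0]
```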
So now that you have a grasp of orbital manoeuvring, why not have a go yourself? This simulator, provided by SpaceX, allows you to try to pilot the Dragon capsule to the ISS docking port.
The astronauts will return to Earth when a new set are ready to take their place, or at NASA’s discretion. NASA are already planning the first fully operational flight of Crew Dragon, with four astronauts, although a launch date for that has not yet been announced and will undoubtedly depend on the outcome of this demonstration flight.
New era for spaceflight
The launch puts SpaceX firmly ahead of the other commercial ventures looking at providing crewed space launches. This includes both Boeing’s Starliner, which first launched last year but was uncrewed, and Sierra Nevada’s Dream Chaser which is planned to be tested with cargo during a trip to the ISS next year.
The ability of the commercial sector to send astronauts to the ISS is an important step toward further human exploration, including establishing a human presence at the Moon, and ultimately, Mars.
With companies competing, however, an open question remains whether safety could at some point be compromised to gain a commercial edge. There is no suggestion this has happened so far, but any crewed mission which failed due to a fault stemming from economic concerns would have serious legal ramifications.
With more nations and companies developing plans for lunar missions, there are obvious advantages in international cooperation and finding cost-efficient launch methods. This is not least because the commercial sector is less dependent on the whims of elected governments, whose direction can change completely from one administration to the next.
So for us scientists looking to expand our knowledge of space, it is a very exciting moment.
When my kids, ages 11 and 8, bang through the back door after school, often the first thing out of their mouths is: “Mom! Can we play Prodigy?”
After a quick mental calculation of how much screen time they’ve already had for the week and how much peace and quiet I need to finish my work, I acquiesce. After all, Prodigy is a role-playing video game that encourages kids to practice math facts. It’s educational.
Though video games are increasingly making their way into classrooms, scientists who study them say the data are lacking on whether they can actually improve learning — and most agree that teachers still outperform games in all but a few circumstances.
But there is growing evidence that some types of video games may improve brain performance on a narrow set of tasks. This is potentially good news for students, as well as for the millions of people who love to play, or at least can’t seem to stop playing.
“There is a lot of evidence that people — and not just young people — spend a lot of time playing games on their screens,” says Richard Mayer, an education psychology researcher at the University of California, Santa Barbara. “If we could turn that into something more productive, that would be a worthwhile thing to do.”
In an article in the 2019 Annual Review of Psychology, Mayer set out to evaluate rigorous experiments that tested what people can learn from games. Though he’s not entirely convinced of games’ educational potential, some studies did suggest that games can be effective in teaching a second language, math and science. The hope, he says, is to figure out how to harness any brain-boosting potential for better classroom results.
Your brain on games
Some of the first evidence that gaming may train the brain came from first-person shooter games. That these oft-maligned games might actually have benefits was first stumbled upon by an undergraduate studying psychology at the University of Rochester in New York. C. Shawn Green gave his friends a test of visual attention, and their scores were off the charts. He and his research supervisor, Daphné Bavelier, thought there must have been a bug in his coding of the test. But when Bavelier took the test, she scored in the normal range.
The difference was that Green’s friends had all been devoting more than 10 hours per week to Team Fortress Classic, a first-person shooter version of capture the flag. Green and Bavelier then rigorously retested the idea with people who were new to gaming. They had two groups train on different types of games: One group practiced a first-person shooter action game for one hour per day for 10 days, and the other spent the same amount of time on Tetris, a spatial puzzle game.
Bavelier, now a cognitive neuroscientist at the University of Geneva in Switzerland, says that action gamers are better able to switch their visual attention between distributed attention (scanning a large area for a particular object) and focused attention (extracting specific facts from a video). “This is called attentional control, the ability to flexibly switch attention as time demands,” she says.
Though it’s not yet clear if improving this kind of attention can help kids in the classroom, Bavelier says, she does see the potential for games to help motivate students — adding a bit of “chocolate” to the learning mix.
Green, now a cognitive psychologist at the University of Wisconsin–Madison, admits that the benefits of playing hours upon hours of Call of Duty may be limited in real life. “There are some people who have jobs with a need for enhanced visual attention,” he says, “such as surgeons, law enforcement or the military.” But, he notes, all games come with an opportunity cost. “If video game time displaces homework time, that can affect reading and math skills negatively.”
In other studies, researchers found that gamers who trained on Tetris were better at mentally rotating two-dimensional shapes than those who played a control game. Students who played two hours of All You Can E.T., an educational game designed to enhance the executive function of switching between tasks, improved their focus-shifting skills compared with students who played a word search game. Not surprisingly, the cognitive skills that games can improve are the ones that players end up practicing over and over during the course of play.
But importantly, these skill improvements are very specific to the task at hand: First-person shooter games don’t improve mental rotation of objects, and Tetris doesn’t improve visual attention. And ironically, in assessing studies for his review, Mayer found no convincing evidence that so-called brain-training games for healthy adults such as the Lumosity suite of games succeed at improving memory, attention or spatial cognition.
The next step is to figure out how these findings may translate to the classroom, where video games are already making inroads. Many students could benefit from an improvement in the ability to flexibly shift their attention when needed. And though first-person shooter games are not really appropriate for grade-school students, Bavelier says researchers are getting better at identifying the core features of video games that drive improved brain agility.
“It could be a game based on a doctor who has to choose the right medicine to save the world. It doesn’t have to be linked to death, violence and zombies,” she says.
“Making a video game that is compelling and effective is difficult,” Green says. Not to mention that games designed purely for entertainment can cost as much as making a blockbuster movie. What might be more useful for classrooms, he says, is to design kid-appropriate games aimed at improving specific brain skills that will help students succeed throughout the school day.
Gaming for gains
At New York University’s Games for Learning Institute, codirector Jan Plass’ team is designing shooter-type, educational video games that boost cognitive skills in executive function without the violence.
In All You Can E.T., players shoot food or drink into aliens’ mouths based on a set of rules that keeps changing, forcing their brains to shift between tasks. And Gwakkamolé is a “Whac-A-Mole”-style game designed to help players improve their inhibitory control by whacking only avocados that aren’t wearing helmets.
“Both of these games make students practice really important executive function skills that some kids didn’t fully develop in early childhood,” Plass says. “Switching tasks and inhibitory control are really important for learning.”
Inhibitory control keeps kids in their seats, helps them focus on a lesson and prevents outbursts that distract the entire class. Practicing this task while playing a fun computer game has an appeal that other approaches do not. “It’s clear that games re-engage kids” who have turned off or tuned out, he says.
But Bavelier questions whether cognitive skills gained from gaming will transfer to other, real-world or classroom situations. “Sure people who play Gwakkamolé get better at inhibition, in that game,” she says. “But it’s a much taller order to show that that skill transfers to better inhibition in general.”
The best classroom video games have certain characteristics, say Mayer and Plass. They focus on one specific cognitive skill and compel players to practice that skill with embedded feedback and responsiveness. The game must be adaptive, meaning the level of challenge increases as the player improves. This is key for classrooms where teachers need one game that will work well for both struggling and advanced students.
Game designers want to hook students on educational games in the same way 270 million of us are driven to play Candy Crush each day. “Games’ most salient feature is their motivational power,” Mayer says. “We want to harness that.”
To do that, Mayer says, brain scientists, education researchers and game designers must engage more deeply with each other to create compelling games that can sharpen cognitive skills while they entertain. Bavelier points to the power of kids’ brains to do things like memorize hundreds of Pokémon characters and their special powers. Imagine if they applied that obsession to learning all the stars in the night sky, Bavelier says.
Although this research is still evolving, I’m reassured about my own kids’ ever-increasing requests for screen time — especially when they beg to play games designed to help them master math.
“Our dream is exactly this,” Plass says. “That kids will be dying to get to their homework.”
Kendall Powell writes about science from her home office in Lafayette, Colorado. Her two school-age gamers like Prodigy and Minecraft best. Follow her @KendallSciWrite
This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. Sign up for the newsletter.
When a mother gives birth to twins, the offspring are not always identical or even the same gender. Known as fraternal twins, they represent a longstanding evolutionary puzzle.
Identical twins arise from a single fertilised egg that accidentally splits in two, but fraternal twins arise when two eggs are released and fertilised. Why this would happen was the puzzle.
In research published today in Nature Ecology & Evolution we used computer simulations and modelling to try to explain why natural selection favours releasing two eggs, despite the low survival of twins and the risks of twin births for mothers.
Since Michael Bulmer’s landmark 1970 book on the biology of twinning in humans, biologists have questioned whether double ovulation was favoured by natural selection or, like identical twins, was the result of an accident.
At first glance, this seems unlikely. The embryo splitting that produces identical twins is not heritable, and the incidence of identical twinning does not vary with other aspects of human biology. It seems accidental in every sense of the word.
Fraternal twinning is different: it runs in families and becomes more common as mothers age. Those do not sound like the characteristics of something accidental.
The twin disadvantage
In human populations without access to medical care there seems little benefit to having twins. Twins are more likely to die in childhood than single births. Mothers of twins also have an increased risk of dying in childbirth.
In common with other great apes, women seem to be built to give birth to one child at a time. So if twinning is costly, why has evolution not removed it?
Paradoxically, in high-fertility populations, the mothers of twins often have more offspring by the end of their lives than other mothers. This suggests having twins might have an evolutionary benefit, at least for mothers.
Our computer simulations and models allowed us to do something otherwise impossible: control whether women ovulated one or two eggs during their cycles. We also modelled different strategies, where we switched women from ovulating one egg to ovulating two at different ages.
We could then compare the number of surviving children for women with different patterns of ovulation.
Women who switched from single to double ovulation in their mid-20s had the most children survive in our models – more than those who always released a single egg, or always released two eggs.
This suggests natural selection favours an unconscious switch from single to double ovulation with increasing age.
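The strategy comparison can be caricatured in a toy Monte Carlo simulation. Every number below (the fertility curve, the twin-birth risk, the reproductive window) is invented for illustration; the published model is far more detailed:

```python
import random

def fetal_survival(age):
    # Assumed shape: the chance a fertilised egg yields a live birth
    # declines steadily with the mother's age.
    return 0.9 - 0.02 * (age - 18)

def simulate(switch_age, trials=2000, seed=0):
    """Average surviving children when a woman double-ovulates from switch_age on."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        children = 0
        for age in range(18, 43):  # assumed reproductive window
            eggs = 2 if age >= switch_age else 1
            births = sum(rng.random() < fetal_survival(age) for _ in range(eggs))
            children += births
            # Assumed: a twin birth carries extra maternal risk that can
            # end reproduction early.
            if births == 2 and rng.random() < 0.15:
                break
        total += children
    return total / trials

always_single = simulate(switch_age=99)   # never double-ovulate
switch_mid20s = simulate(switch_age=25)   # switch in the mid-20s
always_double = simulate(switch_age=18)   # double-ovulate from the start
```

Comparing averages like these, under a realistic fertility curve and risk model, is what lets the authors rank the ovulation strategies.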
A strategy for prolonging fertility
The reason a switch is beneficial is that fetal survival – the chance that a fertilised egg will result in a liveborn child – decreases rapidly as women age.
So switching to releasing two eggs increases the chance at least one will result in a successful birth.
But what about twinning? Is it a side effect of selection favouring fertility in older women? To answer this question, we ran the simulations again, except now when women double ovulated the simulation removed one offspring before birth.
In these simulations, women who double ovulated throughout their lives, but never gave birth to twins, had more children survive than those who did have twins and switched from single to double ovulating.
This suggests the ideal strategy would be to always double ovulate but never produce twins, so fraternal twins are an accidental side effect of a beneficial strategy of double ovulating.
Yet amid the pandemic’s worrying economic developments, there are positive elements to be found. Many of those who have kept their jobs have found they can keep working without the need for a daily commute. Recent research suggests that up to half of UK workers can do their jobs remotely.
And it’s not just office workers. Teachers, GPs, politicians and judges have all swiftly adapted to professional isolation. In just a few weeks, the traditional workplace has been transformed.
Fears that technology will destroy jobs have suddenly given way to relief that it can help save them. Even the prospect of robot doctors treating patients, drones transporting vaccines and 3D printers producing face masks no longer seems like a bad idea.
Of course, working from home (WFH) requires considerable levels of adjustment. But data from our ongoing research shows that, on the whole, people seem quite positive about this aspect of their restricted lives.
In the middle of March 2020, we collected tweets using the hashtags #Coronavirus and #COVID-19 to observe how people were reacting on social networks to the pandemic. After processing 60 million tweets and removing the retweets, we focused on 6,500 messages from March 14 to April 6 that contained the hashtag #WFH.
The idea was to assess how people were feeling about working from home. Overall, we found that 70.6% of the tweets reflected a positive sentiment towards the idea. The tweets that came from UK users after the lockdown on March 23 saw a rise in the positive feedback sentiment to 78.6%.
We used something called “sentiment analysis” to assess the tweets. For our purposes this was a lexicon-based approach where every tweet is represented as a group of words, which are each scored on a scale from negative to positive. A mathematical algorithm is then applied to make a final assessment of the tweet’s overall sentiment.
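The lexicon-based approach can be sketched in a few lines of Python. The tiny lexicon and the sample tweets below are invented for illustration; the study’s actual lexicon and scoring algorithm are not specified here.

```python
import re

# Toy sentiment lexicon for illustration only; a real analysis would use a
# full published lexicon (e.g. AFINN or VADER), which is our assumption here.
LEXICON = {
    "good": 2, "great": 3, "love": 3, "productive": 2, "support": 1,
    "bad": -2, "hate": -3, "stressed": -2, "lonely": -2, "tired": -1,
}

def sentiment(tweet: str) -> str:
    """Represent the tweet as a bag of words, score each word against the
    lexicon, and sum the scores into an overall polarity."""
    words = re.findall(r"[a-z']+", tweet.lower())
    score = sum(LEXICON.get(w, 0) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Hypothetical #WFH tweets, scored the same way the study scored its 6,500.
tweets = [
    "Loving the #WFH life: so productive, and great team support!",
    "Day 12 of #WFH: tired, lonely and stressed.",
]
labels = [sentiment(t) for t in tweets]
share_positive = labels.count("positive") / len(labels)  # fraction of positive tweets
```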
We were also curious about the topics people were talking about. One of the more popular methods to extract themes from text is called “topic modelling”, which is essentially a way of processing large amounts of data – in this case tweets – to find out what words and phrases are being used the most.
Words such as “respect”, “inspire” and “proactive” appeared between 1,000 and 3,000 times in the #WFH tweets, indicating a positive response to the concept of working from home over the course of the pandemic.
Generating a word-cloud to observe the frequency of the words appearing in the tweets during this period, we also found the overall sentiment of the social media response to #WFH to be positive. There is a clear sense of productivity, with words such as “team”, “tips”, “satisfaction”, “service”, “remote”, “support” and “good” among the most prominent.
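At its simplest, the frequency step behind both the topic extraction and the word cloud looks like the sketch below. The stopword list and sample tweets are invented; real topic modelling (such as latent Dirichlet allocation) builds on counts like these rather than stopping at them.

```python
from collections import Counter
import re

# Words too common to be informative; a real analysis would use a fuller list.
STOPWORDS = {"the", "a", "to", "of", "and", "is", "in", "for", "my", "from", "wfh"}

def word_frequencies(tweets):
    """Count how often each non-stopword appears across a set of tweets."""
    counts = Counter()
    for tweet in tweets:
        words = re.findall(r"[a-z]+", tweet.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

# Hypothetical #WFH tweets; a word cloud simply draws these counts to scale.
tweets = [
    "Great #WFH tips from my team today",
    "Remote support from the team is good #WFH",
]
top = word_frequencies(tweets).most_common(3)  # most frequent words first
```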
To build a deeper understanding of these positive feelings, we then mapped the sentiment generated from the tweets per day before the lockdown in the UK.
As the effects of coronavirus and lockdown intensified, so too did the mentions of working from home on Twitter. But there were dips too, most noticeably around March 27, when a steep drop of nearly 50% in #WFH references began and lasted for three days. We believe this could align with media reports highlighting concerns about children’s wellbeing during lockdown, and with widely shared worries about the security of online meeting software, which also surfaced in some of the tweets.
There were negative experiences recorded too. For workers with children to look after, the changed dynamic of domestic life created new and widespread challenges. Yet this also inspired moments of gratitude and offers of help. The tweets we looked at showed evidence of small online communities forming, with people very happy to share #WFH tips and ideas.
Of course, working from home is not a new concept. But coronavirus has, in a very short space of time, forced it to become a normality for much wider sectors of the workforce. And overall, our research shows that the response to this has been positive.
This raises a new quandary about what will happen after the lockdown is lifted. Will businesses start to widen the practice to allow more flexibility to their employees? And if they don’t, how will employees feel about a return to the “old” ways of doing things? No doubt the response on social media will give us some clues.
The 2,000-year-old Dead Sea Scrolls were found in the Qumran caves next to the Dead Sea in the 1940s. They weren’t the only discoveries, though. Archaeological artefacts like pottery and linen were also taken from the caves, and some are now in collections around the world.
Dennis had written his Oxford thesis on Qumran archaeology and was preparing it for a book. Marcello was co-editing the final official publications of the cave findings. I had been working on Qumran materials in private and little-known collections.
We were all interested in non-scroll materials from the caves, which had been gifted and sold by the excavators to collectors, and early photographic dossiers that recorded these discoveries. In our project, scrolls were to be considered as archaeological artefacts among the other items found in the caves.
Blank scroll fragments – or not?
In the list of the many archives and collections we planned to visit, we noted: “Further materials exist in the Reed archive of the John Rylands Library, Manchester.” We knew that this collection contained some blank fragments of scroll – the archive had belonged to Ronald Reed, former leather expert at the University of Leeds, who had been gifted blank scroll fragments after the official excavations of the Qumran caves.
The fragments had been discussed and analysed by previous researchers, and we thought they might be ideal for radiocarbon dating. In our research budget, we had £10,500 for radiocarbon and other scientific analyses for all our designated objects of interest, and £3,000 for digital photography and permissions for publications. We had nothing in the pot for multispectral imaging to reveal texts. This just goes to show how hard it is to design a budget for truly investigative research.
I visited the John Rylands Library at the University of Manchester with our researcher Sandra Jacobs. Curator Elizabeth Gow brought us the boxes of the Reed collection. We went through them to identify interesting items. When we got to Box 6, it appeared to contain a lot of random leather samples. But as she took them out one by one, Elizabeth held up a small envelope labelled “4Q22”. I said, “Hang on, this is one of the scroll fragments.” Strictly speaking it should have been in Box 1, where most of the other fragments were found. The label indicated it came from Cave 4Q from Qumran.
Elizabeth carefully tipped it out. I looked at it through an illuminating magnifying glass. Unexpectedly, I thought I saw a faint, small lamed, the Hebrew letter ‘L’. But was that an illusion, some tricky stain? After all, these pieces were gifted to Reed for study, including destructive types of testing, precisely because they were blank.
After getting to sit with Box 1 in the library reading room, I decided I should closely inspect every single fragment for any possible faded letters. There seemed to be some definite indications. But what to do? Since finding text was not our primary focus and we had a lot of other important work, it seemed best to park this for the future.
We knew there were some very good results now possible through multispectral imaging. But we had no budget for it, and it was expensive. We had to use the budget as planned. It wasn’t until the end of the grant period, with other studies done, that we had enough left over to commission multispectral imaging.
We agreed to use the remaining project funds for this, and institutionally it was permitted. We wondered about taking the fragments to Israel for the study, given the Israel Antiquities Authority’s proven expertise, but Elizabeth informed us that the John Rylands Library had the capability in house.
Enter Gwen Riley Jones, the University of Manchester imaging manager. Every fragment listed as Dead Sea Scrolls material and measuring over 1cm was subjected to multispectral study. In the end, six pieces were selected for full analysis. Of these, four turned out to have readable text, including “4Q22”. We could finally see that lamed plainly, and more besides. On one of the fragments, the word Shabbat (Sabbath) is clearly visible.
We are working with text experts to identify the fragments, and have made progress. But our work does not stop here. We are now planning further study of the Reed materials, which also include textiles, string, pottery and papyrus, to see what else these little bits and pieces can tell us, and more of these will be subjected to scientific tests and multispectral imaging. When we draw up our next budget, whatever we think we probably don’t need will be top of the list.
Estimating the chance of getting a message from life beyond Earth, say within the next decade, isn’t easy. Even the best experts are reluctant to offer precise odds.
“Anybody who gave you a figure would be talking about religion, not science,” says Jill Tarter, the astronomer who has spent most of her life pursuing the quest to find signals from alien life.
And even if you did get an estimate for that probability, it wouldn’t mean much. (After all, the San Francisco 49ers had a 95 percent chance of winning the Super Bowl with under 8 minutes to go in the game — and still lost.)
But however small the probability of seeing a signal from E.T. is, those chances are soon going to be a lot better than they have been in the past.
Sure, after decades of listening, there is still no message. But with more data to sift through, and new technologies with superior search capabilities, odds of hearing from E.T. are rapidly improving. If the probability in the decade 2011–2021 were x percent, it’s going to be 1,000 times x in the following decade, says Andrew Siemion, director of the Berkeley SETI Research Center. (SETI stands for Search for Extra-Terrestrial Intelligence.)
The reason for E.T. optimism stems largely from several new projects in the works, enhanced with advanced methods for discerning an actual message hidden in the static of cosmic cacophony.
Siemion, speaking in Seattle on February 15 at the annual meeting of the American Association for the Advancement of Science, reported a new release of data from Breakthrough Listen, a major enterprise for recording radio signals from space. Now available for others to analyze, the data dump contains 2 petabytes of information (to store that much, you’d need 2,000 of today’s typical PCs with their puny 1 terabyte hard drives).
Tarter, chair emeritus for SETI Research at the pioneering SETI Institute, described new search projects in the works at the institute, including Laser SETI. It’s a plan to train 96 cameras at a dozen locations around the world to keep a constant vigil for “intelligent” optical signals from space.
Another key driver of increased optimism is the abundance of places to look for life. Thanks largely to the Kepler space telescope’s successful explorations, astronomers now know of thousands of stars possessing planets — and have spotted dozens of rocky, Earth-like planets orbiting their stars at a distance likely to be temperate enough for liquid water, a hopeful indicator of habitability.
And of course, it is still possible that alien life might be hiding closer to home. While it’s very unlikely that any intelligent life abides in our solar system, microbial biology might be viable on moons such as Enceladus (Saturn) and Europa (Jupiter). Robots equipped with tools to extract microorganisms from alien soil and conduct chemical analysis could search for life on site. In the meantime, land- or space-based telescopes might detect signs of biological activity in the atmosphere of distant planets. Certain combinations of molecules in the right ratio would be surefire signatures of life in action.
“The ultimate breakthrough in exoplanetary science will be the detection of a biosignature in the atmosphere of a rocky habitable-zone exoplanet,” astronomer Nikku Madhusudhan noted last year in the Annual Review of Astronomy and Astrophysics. “Defining a unique biosignature remains a theoretical challenge, but several candidate molecules have been suggested.”
No one molecule (not even oxygen) would be a definite sign of life. But multiple life-related molecules detected in the atmosphere of a planet with other suitable conditions (such as a comfy temperature) would be strong evidence. Under Earth-like conditions, various molecules, such as oxygen, ozone, methane, carbon dioxide, nitrous oxide and ammonia could be taken as indicators of biological activity.
“Though there is no single ideal molecule, the combination of multiple species (e.g., oxygen and methane) may be a potential biosignature under the given conditions,” wrote Madhusudhan, of the University of Cambridge in England. “In this regard, a detection of oxygen and methane and/or nitrous oxide along with liquid water on a habitable-zone planet, i.e., an almost exact Earth analog, may be a sure sign of life.”
Finding primitive extraterrestrial life would be front-page news (or set a record for clicks), but the grand prize is reserved for the “I” in SETI — intelligent life. SETI searches seek signs of technology produced by extraterrestrial intelligence, most likely in the form of “unnatural” radio waves.
In fact, an alien looking for life in the cosmos might very well spot Earth as inhabited by exactly that method. In the 1990s, Carl Sagan and colleagues took advantage of the Galileo spacecraft’s pass by Earth to probe our planet for telltale signals of our existence. The giveaway was narrow-band radio emissions (abundant signaling at a single radio frequency).
“That as far as we know is an unmistakable indicator of technology, and an unmistakable indicator of life,” Siemion said at the AAAS meeting. “And indeed it is the most detectable signature of life on this planet as viewed from a distant vantage point.”
For now, Earth-based radio telescopes listening to the cosmos might hear a deliberate message, but couldn’t pick up TV shows or other radio-wave “leakage” from alien civilizations. But the Next Generation Very Large Array, now in the planning stage, would have the power to receive such unintentional communication, at least from nearby stars.
Perhaps alien civilizations make more use of lasers than radio, though, which makes the prospect of Laser SETI appealing. But whether patterns are found in the radio or optical region of the electromagnetic spectrum doesn’t matter — such patterns could reveal intelligent activity regardless of their purpose, Siemion pointed out.
“We simply look for compression of electromagnetic energy in time or in frequency or some kind of modulation that is inconsistent with the astrophysical background or the instrumental background and consistent with something that technology could produce,” he said. “So it doesn’t matter if it’s a laser communication system being used to communicate with a spacecraft in some exoplanet system or it’s a giant laser light show that some very advanced civilization produced for the amusement of all the life in their system.”
In any event, receiving a message would be a monumental revelation about the viability of technological civilizations. Nobody knows whether a society that has developed advanced technology can long survive.
“The lifetime of a technological civilization … is a very difficult thing to predict,” said Siemion. “And of course, looking around at our own civilization you have reason to question what that term might be.”
On the other hand, a signal from space would almost certainly be from a civilization that has existed much longer than ours. (Otherwise the likelihood of listening in at exactly the right time would be prohibitively small.) So merely receiving a message might be considered hope that civilization on Earth might not be doomed after all.
Success in receiving a message raises other issues. For one thing, it’s a real possibility that an alien message is clearly an attempt to communicate, but in a language that no earthling could understand. And understood or not, a message received suggests the need to consider a reply. SETI researchers have long agreed that if a signal is detected, no response would be made until a global consensus had been reached on who will speak for Earth and what they would say. But that agreement is totally unenforceable, Tarter pointed out, and nobody has any idea about how to reach a global consensus on anything. (Perhaps the proper reply would just be “HELP!”)
Still, contemplating a response is for the moment a lesser priority than finding a message in the first place. And that might require help from nonhuman intelligence right here on Earth in the form of advanced computers. Recent developments in artificial intelligence research should soon make machine learning an effective tool in the E.T. search, Tarter said at the AAAS meeting.
“The ability to use machine learning to help us find signals in noise I think is really exciting,” she said. “Historically we’ve asked a machine to tell us if a particular pattern in frequency and time could be found. But now we’re on the brink of being able to say to the machine, ‘Are there any patterns in there?’”
So it’s possible that an artificially intelligent computer might be the first earthling to discern a message from an extraterrestrial. But then we would have to wonder, would a smart machine detecting a message bother to tell us? That might depend on whom (or what) the message was from.
“I think there’s something particularly romantic,” said Siemion, “about the idea of machine learning and artificial intelligence looking for extraterrestrial intelligence which itself might be artificially intelligent.”
By Tom Siegfried
Tom Siegfried is a science writer and editor in the Washington, DC, area. His book The Number of the Heavens, about the history of the multiverse, was published last fall by Harvard University Press.
This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. Sign up for the newsletter.
The more we learn about the science behind COVID-19, the more we are beginning to understand the vital role a single molecule in our bodies plays in how we contract the disease.
That molecule, Angiotensin Converting Enzyme 2, or ACE2, essentially acts as a port of entry that allows the coronavirus to invade our cells and replicate. It occurs in our lungs, but also in our heart, intestines, blood vessels and muscles.
And it may be behind the vastly different death rates we are seeing between men and women.
What is ACE2?
ACE2 is an enzyme molecule that connects the inside of our cells to the outside via the cell membrane.
In normal physiology, another enzyme called ACE alters a chemical, Angiotensin I, and converts it into Angiotensin II, which causes blood vessels to constrict. The tightening of the blood vessels leads to an increase in blood pressure.
That’s when the ACE2 molecule comes in: to counteract the effects of ACE, causing blood vessels to dilate and lowering blood pressure.
You may have seen illustrations of the virus that show distinct spikes around the surface of the virus, which form part of the “crown” or “corona” that gives the virus its name. These spikes are called S1 proteins, and they are what binds to the ACE2 molecule on our cells.
The virus is then able to invade the cell by a process called endocytosis – where the cell membrane engulfs the virus and internalises it within a bubble called an endosome.
Once inside the cell, the virus interacts with the host cells’ genetic machinery, taking advantage of the existing structure to replicate extensively.
SARS-CoV-2, the virus behind COVID-19, has a high binding capacity for ACE2 – between ten and 20 times that of the original SARS virus. This means it is much easier for SARS-CoV-2 to get into human cells than it was for the original coronavirus, making it more infectious overall.
ACE2 and COVID-19
But there is still conflicting evidence on the precise role ACE2 plays in coronavirus infections.
In some cases, it can actually be of benefit: ACE2 has been shown to reduce injury to the lung tissue in cases of the original SARS virus in mice by doing its job and causing blood vessels to dilate.
When it comes to the current coronavirus, early studies have shown that the introduction of a human-made form of ACE2 to human cells can block the early stages of infection by binding the spike protein, preventing it from entering the cells. ACE2 thus acts both as an entry port to cells but also as a mechanism to protect the lung from injury.
Around 60% of COVID-19 deaths in Europe have been men, and similar figures have been observed in the US.
We don’t yet fully understand why men die of COVID-19 in higher numbers than women, but it’s possible ACE2 plays a role.
A large study of two independent populations of heart failure patients was recently published in which ACE2 concentrations were found to be significantly higher in men than in women. This could explain why men may be more at risk than women of COVID-19 infection and of dying from the disease.
Recently, ACE2 has been identified in different cells of the heart. There are a greater number of ACE2 receptors on the surface of cells in the heart muscle in people with established cardiovascular disease compared to those without disease.
This may result in a greater number of virus particles entering the heart cells in COVID-19 patients with established heart diseases.
Given the role ACE2 plays in regulating blood pressure, there are also concerns about how it affects COVID-19 patients with hypertension. Men are more likely to have hypertension than women, especially under the age of 50.
Two particular drugs to reduce hypertension also affect ACE and ACE2. These are Angiotensin Converting Enzyme inhibitors, or ACEi, and angiotensin receptor blockers, known as ARBs. In animal studies, both of these drug types increase the production of the ACE2 enzyme and so may increase the severity of COVID-19 infection.
Small independent studies have examined the effect of these treatments on COVID-19, with conflicting results. However, a recent study on the subject has demonstrated that COVID-19 patients with untreated hypertension have a higher risk of death compared to those being treated with ACEi or ARBs.
The role that ACE2 plays in COVID-19 is important in our understanding of the disease and could be used as a target for therapy. Drugs could be designed to block the receptor function of ACE2, but also there is promise in using the molecule itself in preventing entry of the virus into cells.
This would protect organs such as the lung, heart, kidney and intestine from extensive damage, and hopefully reduce mortality.
The economic fallout from the COVID-19 pandemic has caused an unprecedented crisis in journalism that could decimate media organizations around the world.
The future of journalism — and its survival — could lie in artificial intelligence (AI). AI refers “to intelligent machines that learn from experience and perform tasks like humans,” according to Francesco Marconi, a professor of journalism at Columbia University in New York, who has just published a book on the subject: Newsmakers, Artificial Intelligence and the Future of Journalism.
Marconi was head of the media lab at the Wall Street Journal and the Associated Press, one of the largest news organizations in the world. His thesis is clear and incontrovertible: the journalism world is not keeping pace with the evolution of new technologies. So, newsrooms need to take advantage of what AI can offer and come up with a new business model.
For Marconi, journalists and media owners are missing out and AI needs to be at the heart of journalism’s business model in the future. As a professor of journalism at the Université du Québec à Montréal, I have been closely following the evolution of this profession since 1990, and I am mostly in agreement with him.
In Canada, The Canadian Press news agency is, for example, one of the rare media outlets to use AI in its newsrooms. It has developed a system to speed up translations based on AI. The Agence France-Presse news agency (AFP) also uses AI to detect doctored photos.
AI does not replace journalists
Artificial intelligence is not there to replace journalists or eliminate jobs. Marconi believes that only eight to 12 per cent of reporters’ current tasks will be taken over by machines, which will in fact reorient editors and journalists towards value-added content: long-form journalism, feature interviews, analysis, data-driven journalism and investigative journalism.
At the moment, AI robots perform basic tasks like writing two to six paragraphs on sports scores and quarterly earnings reports at the Associated Press, election results in Switzerland and Olympic results at the Washington Post. The outcomes are convincing, but they also show the limits of AI.
AI robots analyzing large databases can send journalists at Bloomberg News an alert as soon as a trend or anomaly emerges from big data.
AI can also save reporters a lot of time by transcribing audio and video interviews. AFP has a tool for that. The same is true for major reports on pollution or violence, which rely on vast databases. The machines can analyze complex data in no time at all.
Afterwards, the journalist does his or her essential work of fact-checking, analyzing, contextualizing and gathering information. AI can hardly replace this. In this sense, humans must remain central to the entire journalistic process.
AI is thus part of a new business model based on breaking down media silos. There needs to be a symbiosis, a "close collaboration" between the editorial staff and other media teams such as engineers, computer scientists, statisticians, and sales or marketing staff.
In a newsroom, more than ever before, databases must be used to find stories that are relevant to readers, listeners, viewers and internet users.
And there are already various AI tools available to detect trends or hot topics on the internet and social media. These tools can also help newsrooms distribute content.
Beware of bias
Of course, newsroom size must be taken into account. A small weekly or a hyper-local media organization may not have the means to act quickly in adopting AI. But for the others, it’s important to start taking action right away. Journalists need to be better trained and begin to work with start-ups and universities to get the best out of this. AI is not a fad. It is here to stay.
Take the current example of COVID-19. This is an opportunity to analyze public health data to make connections, analyze and dig into the data neighbourhood by neighbourhood and street by street. AI can help with that. But it takes well-trained data reporters to do this work.
AI has also helped develop systems for detecting fake videos (deepfakes) and fake news, which are of course supported by experienced journalists from Reuters and AFP, for example.
In this sense, the transformation of newsrooms is only just beginning and Marconi’s essay is a must-read for identifying survival scenarios for media organizations and journalists. Because that’s what it’s all about. We need to better equip our newsrooms and completely rethink the workflow to achieve better collaboration and better content that will attract new and paying subscribers.
There are few bands whose influence was such that it can unequivocally be said that modern music would sound different without them. Kraftwerk, co-founded by Florian Schneider, whose recent death at the age of 73 was announced on May 6, was one such act. The band left an indelible imprint on the sound of popular music by bringing synthesised instruments to the forefront and electronics into the mainstream.
Schneider trained as a flautist at the Dusseldorf Conservatory, which might seem an odd background for a musician whose work did so much to shape the synth-pop and electronic dance music of the 1980s and beyond. But he and band-mate Ralf Hütter – an alumnus of the same music school – exemplified an exploratory approach to music making that traverses musical fields.
Emerging initially from an experimental milieu, their early albums were free-form improvisations that mixed electronic and traditional instruments. Alongside other German electronic acts, including Can and Neu!, they came to represent "krautrock" (as English critics dubbed it) or "Kosmische Musik" ("cosmic music", a term used by the German musicians).
The big breakthrough for Kraftwerk (the name means "power plant") came with the release in 1974 of their fourth album Autobahn. The title track was a sonic depiction of the modernity of long-distance highway travel in their native Germany. Imbued with the sound effects of cars and horns, you could find distant echoes in its lyrics of the driving songs of the Beach Boys and Chuck Berry. The album was a Top 10 hit in Germany, the US and the UK, with a radio edit of the title track – 21 minutes long on the album – confounding expectations by charting as a single in the UK, US, Australia and the Netherlands.
Although some acoustic instruments could still be heard, Autobahn saw the band’s line-up stabilise around Schneider, Hütter and percussionists Wolfgang Flür and Karl Bartos. Its sound crystallised into something precise, evocative, human and yet simultaneously uncanny, laid over rhythmic grooves created with customised electronic instruments.
Influencing the influencers
While subsequent albums, including Radioactivity, Trans-Europe Express and The Man Machine, performed respectably – if not earth-shatteringly – in the commercial realm, Kraftwerk’s true impact was not so much about blazing a trail through the charts as expanding the parameters of popular music and opening up the ears of a generation of innovators to new possibilities. David Bowie’s late 1970s albums recorded in Berlin were heavily indebted to Kraftwerk, and he name-checked its co-founder on V-2 Schneider from Heroes.
Electronically synthesized instruments weren’t new, but had often been regarded as the preserve of experimenters on the commercial fringe, of the soundtrack artists in the more rarefied environs of the BBC Radiophonic Workshop, or as a kind of novelty. Their presence in rock music was tolerated, but rarely celebrated or centralised until Kraftwerk.
Schneider and Hütter paved the way for pop that used electronics as a foundation, rather than a garnish, and cleared the way for the likes of Gary Numan, Depeche Mode and the Human League in the 1980s.
But their shadow was cast much wider than the straight line of synth-driven pop. The exactitude of their tracks, and their sonic distinctiveness, made them ideal fodder for the sampling that was emerging as a recording practice. Their songs Numbers and Trans-Europe Express served as the lynchpin of Afrika Bambaataa's Planet Rock at the roots of hip-hop. Likewise, techno pioneer Derrick May has been explicit about their extensive influence on the formation of the genre. He would recall their popularity with the originators of techno in Detroit: "They were doing this thing that was from another planet … everybody latched onto Kraftwerk."
Enriching pop’s sonic vocabulary
Key to their impact, and their work, was that they operated at a tangent to the pop world, as they had to the world of classical music. Their robotic stage act allowed them to eschew the celebrity game, and the band, Schneider in particular, tended to be reticent about giving interviews in later years. Running their own studio, Kling Klang – their "electronic garden" as they called it – along with control of their business affairs allowed them to exercise aesthetic autonomy. As they told biographer Pascal Bussy in 2004:
We have invested in our machines, we have enough money to live, that’s it. We can do what we want, we are independent, we don’t do cola adverts, even if we might have been flattered by such proposals, we never accepted.
Their emphasis was on constructing sounds, first and foremost, with an omnivorous approach to source materials and subject matter. "We make compositions from everything," Hütter told journalist Sylvain Gire. "All is permitted, there is no working principle, there is no system." Mass appeal, it turned out, was a byproduct.
There’s a degree of irony in a band so tangentially concerned with pop so definitively reshaping it. Their singular approach has yet to be replicated, even as its echoes resound across pop, rock and dance music.
What makes them distinctive is that they didn’t just stand at a crossroads between different generic approaches, but uncovered those pathways, growing popular music’s sonic vocabulary and revealing its boundless capacity for incorporating new ideas.
Ireland, like many countries, has seen hugely increased levels of unemployment as a result of the measures taken to slow the spread of COVID-19. Figures released for March 2020 show that unemployment rates have risen from a modest rate of 5.4% to 16.5% when adjusted to take account of those who have become unemployed as a result of the crisis.
This rate is likely to have risen again since these figures were released. The recently published government draft stability programme is predicting an unemployment rate peaking at 22% in the second quarter of the year, before gradually reducing as containment measures are eased. Of course, this is an employment forecast made in a very uncertain political and economic climate, and just how this will actually play out with respect to the Irish labour market is very difficult to accurately predict.
The pandemic unemployment payment is a wage replacement payment, paid at a rate of €350 (£308) per week. The payment was initially established at a rate of €203, which mirrored the basic adult rate of payment for Irish welfare recipients across already existing payments. However, it was quickly increased to the current €350.
It is available to anyone who has lost their employment or who has been temporarily laid off due to COVID-19, including part-time workers. It is also available to those who were self-employed but who have had to cease working due to COVID-19, and to certain categories of welfare recipients who may also have been working.
This support is expected to be a time-limited payment, lasting for a period of 12 weeks. But it has, in essence, created a two-tier welfare system in Ireland, at least in the short term. One group of welfare recipients is being paid far above what another group receives, raising questions of who is seen to deserve support.
The temporary wage subsidy scheme functions a little differently. It is targeted at employers in a bid to keep employees linked to their places of employment where possible. In effect, it allows employers to pay their employees during the crisis by offering them a government subsidy for wages. Since April 15 subsidies have been offered on a tiered basis of up to 85% of earnings, depending on how much employees are normally paid. The scheme has not been as highly subscribed to as the pandemic unemployment payment. It is also expected to last 12 weeks.
Looking again at the March figures, it is clear that a sizable number of people are currently reliant on both payment types. At the time, more than 513,000 people were registered for support. These figures have very likely increased since the most recent reports. The government draft stability programme reported on April 20 that 584,000 people had registered.
Radical thinking needed
Yet the policies these figures represent are in effect only temporary solutions to what, in the absence of definitive treatments or a vaccine for COVID-19, is likely to be a much longer-term problem. It is also worth noting that in a world where movement is greatly restricted, the usual safety valve of relieving pressure through high levels of economic migration is unlikely to be an option. Therefore, longer term and arguably radical solutions will be required.
One policy option that has arguably gained more traction as a result of COVID is universal basic income. Pre-COVID survey data from Ireland suggests that this might be something people could support in a post-COVID future.
The European Social Survey indicated before the outbreak that when respondents in Ireland were asked if they would welcome the idea of a basic income, 46.2% were in favour and 9.5% were strongly in favour. With no guarantee of a stable labour market for some time to come, it is certainly worth considering whether this is a way to help people over the longer term.
A further possibility might be the adoption of a jobs-sharing initiative through a reduced working week. If everybody works fewer hours, there are potentially more jobs to go round. Whatever choices are made, it would be a mistake to think we can just return to “normal”. Frankly in an Ireland where, in 2019, 122,800 workers were scraping by on a minimum wage of €9.80 per hour or less and where the “at risk of poverty” rate stood at 14%, it is hard to know why anyone would want to.
1. Commuting will not go back to normal
Before lockdown, US commute times reached record levels and most UK workers spent more than a year of their lives travelling to and from work. People tell me that a hybrid strategy of working from home two days a week is one ideal scenario.
Those eager to go back to the office will have to wait. Many will need to work from home for weeks or months to come. The situation is fluid, but governments are drawing up plans for workers to stagger working times, so public transport is not overwhelmed.
The genie is out of the bottle, and commuting is not going back to how it was.
2. Out-of-hours emails will be less acceptable
Research repeatedly shows that sending out-of-hours emails is not only bad etiquette but also creates a coercive work culture that requires people to be available 24/7. Social scientists argue this turns us into worker/smartphone hybrids and causes stress and burnout. Expecting quick answers to email is increasingly seen as bullying.
Many now realise that colleagues might need to work flexibly due to caring responsibilities. Lockdown has encouraged a new acceptance of flexibility. But this shouldn’t extend to having a culture that expects people to be available all the time.
3. Video calls will be limited
Zoom calls will remain part of our lives – but we will change and adapt how we use them. Research shows that video calls are more draining and tiring than in-person meetings.
While video calls are appropriate for some meetings, we don’t need to use them for all our communication. Research suggests many are shifting back to phone calls – which as one manager explained to me “feels more spontaneous and flows better”.
4. More co-working spaces will emerge
Workers forced to continue working from cramped living spaces are desperate for alternatives. When lockdown lifts they will turn to the cafes and co-working spaces that are still in business. Before COVID-19 hit, co-working spaces were projected to increase more than 40% worldwide.
The paradox of remote working is that people crave the flexibility but know that being around others boosts productivity. My research shows that over time remote workers crave the physical closeness that comes with just being alongside other people. It’s exactly why in 2017 IBM pulled many employees back into the office, despite having previously published a 2014 white paper in support of remote working.
Local co-working spaces, as opposed to big investor-funded brands such as WeWork, will do well. Independent co-working spaces in some areas were thriving before COVID-19 – they may become more mainstream if they survive lockdown.
5. Could we become part-time digital nomads?
Digital nomads are extreme remote workers who post Instagram stories from exotic locations. Right now, that lifestyle seems unrelatable, impossible and, to many, unethical.
Nonetheless, many decently paid workers in New York, London and Paris are stuck in uncomfortably small flats, dreaming of escape from lockdown. As a housing manager recently confided to me: "London living without nightlife and culture isn't fun. Everyone wants to escape to somewhere outdoorsy when allowed. I'm not sure I approve but it's understandable."
For now, remote working from different locations is not allowed. But the allure of relocating to a picturesque location remains – and Brian Chesky, CEO of Airbnb, is banking on it. He sees COVID-19 as a business opportunity and told Bloomberg: "People are realising they can work remote … that's a huge opportunity."
The cost of 3D printers has dropped low enough to be accessible to most Americans. People can download, customize and print a remarkable range of products at home, and they often end up costing less than it takes to purchase them.
From rapid prototyping to home factory
Not so long ago, the prevailing thinking in industry was that the lowest-cost manufacturing was large, mass manufacturing in low-labor-cost countries like China. At the time, in the early 2000s, only Fortune 500 companies and major research universities had access to 3D printers. The machines were massive, expensive tools used to rapidly prototype parts and products.
More than a decade ago, the patents expired on the first type of 3D printing, and a professor in Britain had the intriguing idea of making a 3D printer that could print itself. He started the RepRap project – short for self-replicating rapid prototyper – and released the designs with open-source licenses on the web. The designs spread like wildfire and were quickly hacked and improved upon by thousands of engineers and hobbyists all over the world.
Many of these makers started their own companies to produce variants of these 3D printers, and people can now buy a 3D printer for US$250 to $550. Today’s 3D printers are full-fledged additive manufacturing robots, which build products one layer at a time. Additive manufacturing is infiltrating many industries.
My colleagues and I have observed clear trends as the technology threatens major disruption to global value chains. In general, companies are moving from using 3D printing for prototyping to adopting it to make products they need internally. They’re also using 3D printing to move manufacturing closer to their customers, which reduces the need for inventory and shipping. Some customers have bought 3D printers and are making the products for themselves.
This is not a small trend. Amazon now lists 3D printing filament, the raw material for 3D printers, under “Amazon Basics” along with batteries and towels. In general, people will save 90% to 99% off the commercial price of a product when they print it at home.
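The savings arithmetic above is simple: the marginal cost of a home-printed part is essentially the filament it consumes, compared against the product's retail price. A minimal sketch, using hypothetical figures (the $20-per-kilogram spool price and the example item are assumptions for illustration, not quoted from the research):

```python
def home_print_savings(retail_price, filament_grams, filament_price_per_kg=20.0):
    """Return the fractional saving from printing a product at home.

    filament_price_per_kg is an assumed typical spool price, not a cited figure.
    Ignores electricity and printer depreciation, which are small per part.
    """
    material_cost = (filament_grams / 1000) * filament_price_per_kg
    return 1 - material_cost / retail_price

# Hypothetical example: a $15 retail item printed from 50 g of filament.
savings = home_print_savings(15.0, 50)
print(f"{savings:.0%}")  # prints "93%", within the 90% to 99% range cited
```

Larger, cheaper items sit at the low end of the cited range; small, high-margin items (replacement clips, brackets, toys) push toward the 99% end.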
Coronavirus accelerates a trend
We had expected that adoption of 3D printing and the move toward distributed manufacturing would be a slow process as more and more products were printed by more and more people. But that was before there was a real risk of products becoming unavailable as the coronavirus spread.
The value of industrial commodities continues to slide because the coronavirus has put a major dent in demand as manufacturers shut down and potential customers are quarantined. This will limit people’s access to products while increasing their costs.
The disruptions to global supply chains caused by strict quarantines, stay-at-home orders and other social distancing measures in industrialized nations around the world present an opportunity for distributed manufacturing to fill unmet needs. Many people are likely, in the short to medium term, to find some products unavailable or overly expensive.
In many cases, they will be able to make the products they need themselves (if they have access to a printer). Our research on global value chains found that 3D printing with plastics in particular is well advanced, so any product with a considerable number of polymer components, even flexible ones, can be 3D printed.
Metal and ceramic 3D printing is already available and expanding rapidly for a range of items, from high-cost medical implants to rocket engines to improving simple bulk manufactured products with 3D printed brackets at low costs. Printable electronics, pharmaceuticals and larger items like furniture are starting to become available or will be in the near future. These more advanced 3D printers could help accelerate the trend toward distributed manufacturing, even if they don’t end up in people’s homes.
There are some hurdles, particularly for consumer 3D printing. 3D printing filament is itself subject to disruptions in global supply chains, although recyclebot technology allows people to create filament from waste plastic. Some metal 3D printers are still expensive and the fine metal powder many of them use as raw material is potentially hazardous if inhaled, but there are now $1,200 metal printers that use more accessible welding wire. These new printers as well as those that can do multiple materials still need development, and there’s a long way to go before all products and their components can be 3D printed at home. Think computer chips.
When my colleagues and I initially analyzed when products would be available for distributed manufacturing, we focused only on economics. If the coronavirus continues to disrupt supply chains and hamper international trade, however, the demand for unavailable or costly products could speed up the transition to distributed manufacturing of all products.