Month: February 2019

  • 'Spectre' flaw returns to haunt security experts

    A serious flaw in the security of some computer chips, known as Spectre, is impossible to fix with software alone, Google experts claim.  The flaw is so deeply embedded in the way modern processors are designed that computers of the future will need to be redesigned to avoid it.

    Spectre affects chips manufactured by Intel, AMD and ARM, which are used in almost every one of the world's smartphones.  It leaves devices exposed to programmes that can steal data, passwords and emails.  When Spectre was discovered and its ubiquity became clear, tech firms desperately scrambled to find a solution.

    Early software mitigations were found to decrease device speed by up to 30 per cent, and Google introduced a feature in Chrome that isolates each individual page to limit the vulnerability.  There has yet to be a comprehensive solution, and no fix for the fundamental issue has been found.

    One manifestation of the issue, known as Speculative Store Bypass, has now been labelled as irreparable by Google's experts.  Ben Titzer, co-author of the study, told New Scientist: 'The entire field of computing missed this.'

    Spectre exploits a weakness in a feature known as speculative execution which improves processing speed.

    Experts are now resigned to the fact there may never be a simple patch or fix and the only true way to leave Spectre behind is to change the way machines are made.

    Exact details of what can be stolen remain a mystery, so it is impossible to gauge the effectiveness of previous software fixes.  In speculative execution, chips make guesses about future calculations, which are then discarded if incorrect.
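    That guess-and-discard behaviour can be illustrated with a toy model (a simplified sketch for intuition only, with illustrative names; real Spectre attacks require native code and precise cache-timing measurements):

    ```python
    # Toy model of the Spectre pattern: the processor guesses it can skip a
    # bounds check, performs a load ahead of time, then discards the result
    # if the guess was wrong.  Crucially, the cache footprint of the
    # discarded load survives, which is what attackers measure.

    SECRET = 42                 # data the program should never reveal
    public_data = [10, 20, 30]  # data the program may legitimately read
    cache = set()               # stands in for the CPU cache

    def speculative_read(index):
        # The hardware races ahead of the bounds check ("speculation")...
        value = public_data[index] if index < len(public_data) else SECRET
        cache.add(value)  # ...and the load leaves a trace in the cache
        # The bounds check finally resolves; out-of-bounds results are squashed
        return value if index < len(public_data) else None

    assert speculative_read(1) == 20     # in-bounds read behaves normally
    assert speculative_read(99) is None  # out-of-bounds result is discarded...
    leaked = SECRET in cache             # ...but its side effect remains visible
    ```

    Software mitigations work by blunting the side channel (coarser timers, isolating pages from each other) rather than removing speculation itself, which is why the underlying hardware issue persists.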

    The research is available on pre-print site arXiv.

    Extracted from: www.dailymail.co.uk

  • No wonder dogs are man's best friend!

    It is said you cannot teach an old dog new tricks.  But, when it comes to personality, it seems dogs continue making progress throughout their lifetime.

    A new study has found that dogs’ personalities may change over time – and even tend to line up to match their owner’s.  The findings upend previous assumptions that dogs’ personalities are generally unchanging due to the overall stability of their lives.  According to the researchers, the results suggest dogs experience personality changes similar to how humans do over the course of their lives.

    Lead author William Chopik, professor of psychology at Michigan State University said: ‘When humans go through big changes in life, their personality traits can change.  We found that this also happens with dogs – and to a surprisingly large degree.  We expected the dogs’ personalities to be fairly stable because they don’t have wild lifestyle changes [like] humans do, but they actually change a lot.  We uncovered similarities to their owners, the optimal time for training, and even a time in their lives that they can get more aggressive toward other animals.’

    In the study, led by Michigan State University, the researchers surveyed the owners of more than 1,500 dogs.  The sample covered 50 different breeds and included both male and female dogs, aged from just a few weeks to 15 years old.

    Dog owners were given questionnaires about their own personalities as well as their dogs’, the researchers say.  And, this revealed some similarities.

    Chopik said: ‘We found correlations in three main areas: age and personality, in human-to-dog personality similarities and in the influence a dog’s personality has on the quality of its relationship with its owner.  Older dogs are much harder to train; we found that the “sweet spot” for teaching a dog obedience is around the age of six, when it outgrows its excitable puppy stage but before it’s too set in its ways.’

    Humans with extroverted personalities tended to rate their dogs as excitable and active, while dog owners with higher rates of negative emotions were more likely to rate their dogs as fearful and less responsive to training.

    Agreeable dog owners were often found to have dogs that were less aggressive towards both animals and people.

    According to the researchers, the findings tap into the idea of ‘nature versus nurture’ – a concept commonly used in the discussion of human personality.

    In future studies, Chopik plans to focus on the effect a dog’s home environment can have on its behaviour.

    Chopik said: ‘Say you adopt a dog from the shelter.  Some traits are likely tied to biology and resistant to change, but you then put it in a new environment where it’s loved, walked, and entertained often.  Now that we know dogs’ personalities can change, next we want to make a strong connection to understand why dogs act – and change – the way they do.’

    Extracted from: www.dailymail.co.uk

  • Privacy fears as bosses deploy high-tech fitness trackers to spy on employees

    Soon, your boss may want to keep an eye on you in more ways than just whether or not you are at your desk.

    A growing number of employers are urging their staff to wear company-provided fitness trackers with the incentive that they'll receive quarterly payouts for reaching step goals, in addition to other rewards like insurance reimbursements.  In exchange, bosses are supplied with detailed data on their employees, ranging from their daily steps, hours spent sitting down, heart rate and sleep quality, according to the Washington Post.

    The practice has raised privacy concerns among many, while creating a totally new level of interaction between workers and their employers.

    Lee Tien, of the consumer privacy advocacy group the Electronic Frontier Foundation, told the Post: 'The more that employers know about their employees' lives, especially outside the workplace, off-duty hours, the more potential control or effects they have on their lives in the first place.  It's quite possible there will be effects on whether you are retained, promoted, demoted - who is first to be laid off.'

    The trackers are provided to staff either at no cost or for a small fee through their company's insurance provider.  Only employees who voluntarily opt in take part in the program.  The device tracks their fitness activity and then sends the data to an app on their boss' phone.

    Many employers say they began offering the trackers to get employees to stay fit and save on health-care costs, but it's not yet clear whether this approach makes an impact, the Post explained.

    Some bosses push their employees more than others, with some staffers reportedly getting calls from their superior congratulating them on reaching step goals.

    Device makers like Fitbit have even added a call service to remind employees of their health targets.

    Adam Pellegrini, senior vice president of Fitbit Health Solutions, told the Post: 'Sustained behavior change is really the focus.  Through the system, we can actually see who is not hitting their goals, who is not adhering to that action plan.'

    Approximately 20 per cent of employers who provide health insurance said they collected data from wearable devices in 2018, which is a marked increase from the 14 per cent who did the year prior.

    This data is often shared not only with bosses, but also with the device maker, health insurance companies and other parties, the Post said.  Because it's voluntarily shared with these parties, the data isn't protected under the Health Insurance Portability and Accountability Act (HIPAA), a law that protects health records from public disclosure.  For this reason, employees should think carefully before volunteering to wear a company-provided fitness tracker.

    'The Fitbit or Apple Watch applications... may yield clues to things about you that you are not even aware of, or not ready for other people.  Individuals and consumers who are buying these devices don't understand that is a potential consequence to know,' Tien told the Post.

    Extracted from: www.dailymail.co.uk

  • Your Google security system could be spying on you!

    Google found itself at the centre of a new privacy scandal on 19 February, after it emerged that it has buried a secret microphone in its Nest Secure alarm system.  The microphone is not listed on the specifications for the Nest device, which is designed to keep users safe and protect their privacy and belongings.

    It came to light after Google launched a software update allowing users to control the Nest devices with their voices.  The voice activation requires a microphone to work.

    However, the web giant insisted that it had kept the microphone a secret by accident, as it apologised to customers.

    A spokesman said: 'The on-device microphone was never intended to be a secret and should have been listed in the tech specs.  That was an error on our part.'

    The tech firm added that the microphone was never switched on prior to the software update, and that it had been included so that features such as detecting the sound of breaking glass could be offered in future.

    A spokesman added: 'Security systems often use microphones to provide features that rely on sound sensing.  We included the mic on the device so that we can potentially offer additional features to our users in the future, such as the ability to detect broken glass.'

    The Nest Secure alarm system containing the microphone has been sold throughout the US, but is yet to go on sale in Europe.  Security campaigners in Britain said that the scandal confirmed many of their worst fears.

    Silkie Carlo, director of the privacy lobby group Big Brother Watch, said: 'This appears to be deceptive rather than a "mistake", which is incredibly damaging for public trust in Google.  Many of our worries about smart home devices appear to be proving true.  This market is normalising the disturbing notion of tech giants constantly listening within the privacy of our homes.  Google should be held to account for wrongly advertising this product.'

    She added: 'It is hard to believe Google cares about people's privacy after selling a security product with a secret microphone in it.'

    The revelation that Nest Secure contains a hidden microphone is not the first time that Google's smart home devices have come under fire.  Last year, a so-called 'white hat' – a hacker who sets out to do good – managed to take control of another man's Nest device.  The white hat then warned him about the breach by using the device to speak directly into his home.  Meanwhile, a mother in New York found that her Nest home monitoring system had been hacked by a stranger – who started talking to her five-year-old son.  The stranger quizzed the boy on how he travelled home from school.

    Extracted from: www.dailymail.co.uk

  • Experts warn of 'thriving illicit trade' in personal information on the dark web

    Someone could be logging into your Facebook account for as little as $9, or tweeting as you for much less, according to the latest trading prices on the dark web.

    Sale prices for stolen IDs, personal information, and hacked accounts in both the US and UK have emerged as part of a new online security study.

    Login details for some sites, including Ticketmaster and Skype, are being traded for under $1.30.  The most expensive items uncovered were personal banking details, with bank card logins costing an average of $460.

    The revelations come from Top10vpn.com, which has just released its latest figures for stolen personal information prices on the 'dark web'.  It published the latest prices as a price index for both the US and UK.

    Simon Migliano, Head of Research at Top10vpn.com, called the online market in personal data a 'thriving illicit trade in stolen personal information'.  He noted that while someone's 'full online identity' can cost as much as $1,050, individual account details are on sale for single figures.

    In the UK, streaming sites have recently become popular targets for cyber-criminals, and their prices are increasing.  Netflix accounts are going for around $10.50, as are Uber and Fortnite data.  One of the cheapest was Twitter, at just $2, which is still a 28 per cent increase on last year's price.  Facebook saw an 86 per cent uptick in worth this year, but still costs just under $9.

    Dark web UK market price index  February 2019 
    Name of website  Average Price  Avg. Price Change
    British Airways £31.94 +375%
    Morrisons £15.95 N/A
    Amazon £14.53 +114%
    Apple £8.67 -21%
    Netflix £8.19 +37%
    Uber £7.61 +52%
    Facebook £6.96 +86%
    Airbnb £4.78 -16%
    Nando's £2.44 N/A

    The price of British Airways data has shot up since the airline's mass data breach in September last year, as has Facebook's following the Cambridge Analytica scandal.

    Mr Migliano added: 'The short, scary answer is that some of your personal data is almost certainly already for sale on the dark web.  The first step is to find out which of your accounts have been stolen.'

    'Have I Been Pwned should be your first port of call, as it’ll help you find out which of your email accounts and old passwords have been compromised.  If you have been breached, change your passwords.'
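    Have I Been Pwned's companion Pwned Passwords service can be queried without ever sending the password itself, using k-anonymity: only the first five characters of the password's SHA-1 hash leave your machine.  A minimal sketch of building such a query (the URL format follows the public api.pwnedpasswords.com range endpoint; checking email addresses against breaches uses a separate, key-protected API):

    ```python
    import hashlib

    def pwned_range_query(password):
        """Build a k-anonymity query for the Pwned Passwords range API.

        Only the 5-character hash prefix goes into the URL; the caller then
        searches the server's returned suffix list locally for `suffix`.
        """
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        return url, suffix

    url, suffix = pwned_range_query("password")
    # Response lines look like "SUFFIX:COUNT"; a line matching `suffix`
    # means the password has appeared in known breaches.
    ```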

    Referring to the US price index, which is significantly different from the UK's, Mr Migliano added: 'Hacked data is cheap on the dark web: most individual accounts sell for less than $15, even big names like Apple, Fortnite, Netflix and Airbnb.  Notable exceptions to the rule include Facebook and Amazon accounts, which have soared in value since last year.'

    Extracted from: www.dailymail.co.uk

  • Killer robots should be banned to prevent them wiping out humanity, scientists warn

    Killer robots should be banned to prevent them wiping out humanity, the world’s largest gathering of scientists was told on 14 February.

    While full-blown android soldiers remain the stuff of science fiction, advances in artificial intelligence mean machines with the power to select and attack targets without human input could soon be developed.

    Such robots represent the ‘third revolution’ in warfare after gunpowder and nuclear weapons, scientists and campaigners told the American Association for the Advancement of Science’s annual meeting in Washington DC.

    Mary Wareham, from the Campaign to Stop Killer Robots, said: ‘Bold leadership is needed for a treaty.  The security of the world and future of humanity hinges on achieving a ban on killer robots.’

    Backers include UN Secretary-General Antonio Guterres, who has called autonomous weapons ‘politically unacceptable and morally repugnant’.

    Extracted from: www.dailymail.co.uk

  • Mountains buried 400 miles underground ‘could be BIGGER than Everest’

    An underground 'mountain' taller than Everest has been discovered 660 kilometers beneath the Earth's surface.

    A study by Princeton scientists into the boundary between the upper and lower mantle of the Earth has surprisingly found ridges and clefts that are potentially rougher than anything on the planet's surface.  The features are located at the 660-kilometer boundary, straight down from the surface.

    Dr Wenbo Wu, one of the geophysicists on the paper said: 'In other words, stronger topography than the Rocky Mountains or the Appalachians is present at the 660-km boundary'.

    Using wave data from an 8.2-magnitude earthquake in Bolivia, mountains and other topography were discovered at the base of the boundary.  The earthquake, which took place in 1994, was the second-largest deep earthquake ever recorded.

    The most powerful waves on the planet come from giant earthquakes, which can generate shock waves that travel through the Earth's core to the other side of the planet in all directions and back again.  The data from these shock waves allow scientists to study deep inside the Earth, by modelling the kind of topography that could have caused the waves to scatter in the way observed.

    They do so using supercomputers, such as Princeton's Tiger cluster, to simulate the complicated behavior of scattering waves.

    Dr Wu, now a postdoctoral researcher at the California Institute of Technology, added: 'We know that almost all objects have surface roughness and therefore scatter light.  That's why we can see these objects — the scattering waves carry the information about the surface's roughness.  In this study, we investigated scattered seismic waves traveling inside the Earth to constrain the roughness of the Earth's 660-km boundary.'

    These studies can tell us a lot about the Earth's formation and how heat and material travel through its different layers.  No one knows exactly how these huge jutting mountain structures may have evolved, but they are likely the result of the movement of material and chemical mixing between the layers.

    Dr Wu said: 'The smoother areas of the 660-km boundary could result from more thorough vertical mixing, while the rougher, mountainous areas may have formed where the upper and lower mantle don’t mix as well.'

    Dr Jessica Irving, who led the research, said: 'What's exciting about these results is that they give us new information to understand the fate of ancient tectonic plates which have descended into the mantle, and where ancient mantle material might still reside.'

    The full study was published in Science.

    Extracted from: www.dailymail.co.uk

  • Creepy website uses AI to create 'deepfake' photos of humans who don't exist

    A new website lets users click through an endless library of fake human faces created by artificial intelligence.  Called ThisPersonDoesNotExist.com, the results are startlingly lifelike and may make you question what's real and what isn't.

    Every time a user refreshes their browser, the site spits out a randomly generated face, spanning from older men and women to even children.  A few images may appear glitchy or blurred in some areas, but most would make you think the subject is a real human being.

    Philip Wang, a software engineer at Uber, said he built the website using research first released by chipmaker Nvidia last year.  Nvidia researchers built an algorithm using a type of neural network called a Generative Adversarial Network (GAN) that's able to create customized and realistic-looking faces.

    GANs are able to learn from large sets of data, looking for patterns and producing new data.  The researchers supplied the network with 7,000 images of human faces from Flickr.  It learned by starting with low-resolution images and recognizing more and more details as the resolution of each image increased, according to Lyrn.ai.

    Nvidia's algorithm, called StyleGAN, is trained to generate random human faces, but the software can randomly generate any number of things, ranging from fake animals, anime characters and fonts to fake documents.

    Wang explained in a Facebook post: 'Faces are most salient to our cognition so I've decided to put that specific pretrained model up… Each time you refresh the site, the network will generate a new facial image from scratch from a 512 dimensional vector.'

    In all, the process takes only a matter of seconds before a seemingly real, yet fake face appears before the viewer.
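    The 'refresh for a new face' step Wang describes boils down to sampling a fresh 512-dimensional latent vector and feeding it to the trained network.  A sketch of the sampling side (the `generator` name is purely illustrative, standing in for the pretrained StyleGAN model):

    ```python
    import random

    LATENT_DIM = 512  # StyleGAN's input: a 512-dimensional vector

    def sample_latent(seed=None):
        """Draw one random latent vector; each new draw yields a new face."""
        rng = random.Random(seed)
        return [rng.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]

    z = sample_latent()
    # image = generator(z)  # hypothetical: the pretrained network maps z to a face
    ```

    The same seed reproduces the same vector, which is why a given latent code always corresponds to the same fake face.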

    'Most people do not understand how good AIs will be at synthesizing images in the future,' Wang told Motherboard.

    A video explains in further detail how Nvidia's StyleGAN is able to generate fake human faces on demand.

    'Our generator thinks of an image as a collection of "styles," where each style controls the effects at a particular scale,' Nvidia researchers explained.

    The styles fall into three categories: coarse styles, covering the subject's pose, hair and face shape; middle styles, covering facial features and eyes; and fine styles, which include the color scheme.

    In the researchers' demonstration image, the top row contains the only legitimate photographs of real people; the rest have been computer-generated.  The AI uses various traits to create randomly generated people.

    The researchers said: 'We can choose the strength to which each style is applied, with respect to an "average face".  By selecting the strength appropriately, we can get good images out every time (with slightly reduced variation).'
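    Applying a style 'with respect to an average face' amounts to interpolating between an average latent code and a sampled one.  A minimal sketch of that idea (the names `average_face` and `strength` are illustrative, not Nvidia's API, and the 3-dimensional vectors are toy stand-ins):

    ```python
    def apply_style(average_face, style, strength):
        """Move from the average latent code toward a sampled style.

        strength = 0.0 reproduces the average face; 1.0 applies the style
        fully; values in between trade variety for reliably good output.
        """
        return [a + strength * (s - a) for a, s in zip(average_face, style)]

    avg = [0.0, 0.0, 0.0]     # toy "average face" latent code
    style = [1.0, -2.0, 0.5]  # toy sampled style vector
    half = apply_style(avg, style, 0.5)  # halfway between average and style
    ```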

    While ThisPersonDoesNotExist.com and Nvidia's research showcase the impressive abilities of AI, they also undoubtedly raise questions about the tech's possible implications.  For example, GANs have been used to create 'deep fakes,' or artificially manipulated videos that use AI to swap celebrity faces onto porn star bodies.

    The same technology has also been used to create digitally-altered videos of world leaders, including former President Barack Obama and Russian president Vladimir Putin.

    'The idea that someone could put another person's face on an individual's body would be like a homerun for anyone who wants to interfere in a political process.  This is now going to be the new reality, surely by 2020, but potentially as early as this year,' Virginia senator Mark Warner, who has led a crackdown on political ads on social media platforms, told CBS.

    Extracted from: www.dailymail.co.uk

  • Stunned metal detectorist unearths a 1,500-year-old Anglo-Saxon GOLD pendant

    An amateur metal detectorist has compared finding a 6th century Anglo-Saxon pendant in a muddy field to 'winning the lottery'.  The shiny piece of gold was originally mistaken for a 'chocolate coin' due to its immaculate preservation, but experts proved it is a gold pendant from 1,500 years ago.

    Rachel Carter, 41, was searching a Kent field with her partner and her metal detector when she stumbled across the find.  After showing the coin to her partner, they realised what she had mistaken for a 'piece of junk' was in fact an authentic piece of British history.

    Ms Carter said: 'As soon as I put the detector down, I got a signal that was going mad so I dug down and pulled out this pendant.  It was only about five inches down and was so perfect and gold and new-looking that at first I thought it was a bit of junk - you'd think you could unwrap it and eat the chocolate from inside.  I went over to Ricky and said "do you reckon this is anything?" and he was like "oh my God".'

    Ms Carter and her partner, Ricky Shubert, say it was a sign from her mother, who passed away nearly a year before the discovery.  She had restarted the hobby after caring for her elderly mother and made the find at Christmas.

    'Some people in my club have been digging for 50 years and they say they've never seen anything like it.'

    Using her partner's metal detector, Ms Carter briefly scanned over a section of earth at her friend's farm before setting off to tackle the far side of the field.  But her partner pointed out she had missed a spot and urged her to take another look.  She said: 'Ricky said "hang on, you said you wanted to try this part" and he encouraged me to come back and have another look.  My mum always said to me "one day you'll find something really special."  All I ever wanted was to find something gold and religious for her - because she was Catholic - and then I did.  It's like she sent this as a sign, saying "see? Keep going."  We've been back a few times since then, and haven't found anything.  Usually you find caps, coins, that sort of thing - but we've found absolutely nothing since.  It's really weird.'

    The pendant was reported to a Kent Finds liaison officer and could be displayed in a museum with Rachel's name underneath it.  She said: 'It would be lovely to see it in a museum, with my name underneath it.  But to be honest, I'd rather keep it because it's absolutely amazing - finding it was like winning the lottery, without having known what the ticket was worth.'

    The couple hopes to find out how the pendant came to be at the farm near Marshside and more about the area's history.

    Andrew Richardson, outreach and archives manager at Canterbury Archaeological Trust, said it is a 'significant find.'  He added: 'My first impression is that it is a gold coin of 6th or early 7th century date, possibly an imported Frankish tremissis, that has been re-fashioned as a pendant.  We have seen these before in Kent.  Imports of quantities of Byzantine and Frankish gold coinage into Kent were not infrequent, probably as gifts.  Anglo-Saxon England was not thought to be a coin-using economy, so coins tended to either get melted down to make jewellery, or occasionally got refashioned as pendants, as is the case here.  During the early 7th century, the Kentish kings began to mint their own gold coins, known as thrymsas, and from then on a coin-using economy developed.'

    He added: 'That stretch of the north Kent coast is certainly an area where there is plenty of evidence of prehistoric, Roman and Anglo-Saxon settlement.  Coastal erosion here means that considerable archaeology is being revealed along this stretch of shore, and I'd expect this to continue.  This is a significant find, and the finder has done the right thing by reporting it to the Finds Liaison Officer for Kent.'

    Extracted from: www.dailymail.co.uk

  • Researchers develop AI algorithm that can detect fake profiles on popular dating apps

    Scientists have developed an algorithm that can spot dating scams.  A team of researchers trained AI software to 'think like humans' when looking for fake dating profiles.  While the algorithm has only been deployed in a research setting, it could one day be used to protect users on popular dating services like Tinder and Match.com.  The study was conducted by a group of researchers from the University of Warwick and published recently.

    Researchers first trained the algorithm by supplying it with profiles that had already been deemed fake.  From these, the AI was able to detect recurring elements that might indicate a profile is fake.  For example, fake profiles might share the same phone number or IP address, or include stolen material, such as someone else's photo or user bio.  Additionally, many of the fake profiles used similar 'stylistic patterns of persuasive messaging', not unlike the repeated language you might see across spam emails.

    After scanning all the fake profiles, the algorithm applied its knowledge to profiles submitted to online dating services and came to a conclusion on the probability of each profile being fake.  In total, only one per cent of the profiles it flagged as fake were genuine, according to the University of Warwick.
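    The recurring signals described above (shared phone numbers or IP addresses across supposedly independent profiles) lend themselves to a simple scoring pass.  A toy sketch of that one idea, not the Warwick team's actual model, which also weighs stolen images and message style; the field names are illustrative:

    ```python
    from collections import Counter

    def flag_suspect_profiles(profiles, min_shared=2):
        """Flag profiles whose phone number or IP address is reused elsewhere.

        `profiles` is a list of dicts with "id", "phone" and "ip" keys.
        Reuse of contact details across profiles is one of the recurring
        elements found in known-fake accounts.
        """
        phone_counts = Counter(p["phone"] for p in profiles)
        ip_counts = Counter(p["ip"] for p in profiles)
        return [
            p["id"]
            for p in profiles
            if phone_counts[p["phone"]] >= min_shared
            or ip_counts[p["ip"]] >= min_shared
        ]

    profiles = [
        {"id": "a", "phone": "555-0100", "ip": "10.0.0.1"},
        {"id": "b", "phone": "555-0100", "ip": "10.0.0.2"},  # shares a's phone
        {"id": "c", "phone": "555-0199", "ip": "10.0.0.3"},
    ]
    suspects = flag_suspect_profiles(profiles)
    ```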

    The report doesn't say how successful it was at flagging genuine profiles or how many profiles it reviewed overall.  Still, the researchers say it bodes well for bringing the software to actual dating apps.

    'The aim is to further enhance the technique and enable it to start being taken up by dating services within the next couple of years, helping them to prevent profiles being posted by scammers,' the University of Warwick explained.

    They believe the algorithm is sorely needed in an industry where romance scams are on the rise.

    On 12 February, the Federal Trade Commission issued a notice saying that 'romance scams', or scenarios where scammers trick internet users looking for love into sending them money, cost victims an astonishing $143 million in 2018.  That's up from $33 million the previous year, making it the most costly type of consumer fraud reported to the FTC.

    Tom Sorrell, a co-author of the study, said in a statement: 'Online dating fraud is a very common, often unreported crime that causes huge distress and embarrassment for victims as well as financial loss.  Using AI techniques to help reveal suspicious activity could be a game-changer that makes detection and prevention quicker, easier and more effective, ensuring that people can use dating sites with much more confidence in future.'

    Extracted from: www.dailymail.co.uk