
About This Club

Everything regarding Artificial Intelligence

  1. What's new in this club
  2. Give minds to the machines and at that rate you won't be able to compete with them. The tables would turn before you can even blink an eye.
  3. Researchers at the non-profit AI research group OpenAI just wanted to train their new text generation software to predict the next word in a sentence. It blew away all of their expectations and was so good at mimicking writing by humans that they’ve decided to pump the brakes on the research while they explore the damage it could do.

Elon Musk has been clear that he believes artificial intelligence is the “biggest existential threat” to humanity. Musk is one of the primary funders of OpenAI, and though he has taken a backseat role at the organization, its researchers appear to share his concerns about opening a Pandora’s box of trouble. This week, OpenAI shared a paper covering their latest work on text generation technology, but they’re deviating from their standard practice of releasing the full research to the public out of fear that it could be abused by bad actors. Rather than releasing the fully trained model, they’re releasing a smaller model for researchers to experiment with.

The researchers used 40GB of data pulled from 8 million web pages to train the GPT-2 software. That’s ten times the amount of data they used for the first iteration of GPT. The dataset was pulled together by trawling through Reddit and selecting links to articles that had more than three upvotes. When the training process was complete, they found that the software could be fed a small amount of text and convincingly continue writing at length based on the prompt. It has trouble with “highly technical or esoteric types of content,” but when it comes to more conversational writing it generated “reasonable samples” 50 percent of the time.

In one example, the software was fed a two-sentence prompt and was able to continue writing a whimsical news story for another nine paragraphs in a fashion that could believably have been written by a human being.

GPT-2 is remarkably good at adapting to the style and content of the prompts it’s given. The Guardian was able to take the software for a spin and tried out the first line of George Orwell’s Nineteen Eighty-Four: “It was a bright cold day in April, and the clocks were striking thirteen.” The program picked up on the tone of the selection and proceeded with some dystopian science fiction of its own.

The OpenAI researchers found that GPT-2 performed very well when it was given tasks that it wasn’t necessarily designed for, like translation and summarization. In their report, the researchers wrote that they simply had to prompt the trained model in the right way for it to perform these tasks at a level comparable to specialized models. After analyzing a short story about an Olympic race, the software was able to correctly answer basic questions like “What was the length of the race?” and “Where did the race begin?”

These excellent results have freaked the researchers out. One concern they have is that the technology could be used to turbo-charge fake news operations. The Guardian published a fake news article written by the software along with its coverage of the research. The article is readable and contains fake quotes that are on topic and realistic. The grammar is better than a lot of what you’d see from fake news content mills. And according to The Guardian’s Alex Hern, it only took 15 seconds for the bot to write the article.
Other concerns the researchers listed as potential abuses included automating phishing emails, impersonating others online, and automatically generating harassment. But they also believe there are plenty of beneficial applications to be discovered. For instance, it could be a powerful tool for developing better speech recognition programs or dialogue agents. OpenAI plans to engage the AI community in a dialogue about their release strategy and hopes to explore potential ethical guidelines to direct this type of research in the future. They said they will have more to discuss in public in six months. [OpenAI via The Guardian] (A short sketch of how the released smaller model can be prompted appears after this list.)
  4. Something doesn’t “feel” right about this. 🙂
  5. 1507940147251-drlcss.mp4
  6. Talk about inciting the mobs!!! This video is very scary.
  7. We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response.
  8. I've had many eye surgeries, which were painless, but I was still awake and somewhat aware... I hate it when they have that military-powered searchlight shining directly in your eye, and you hear the doctor say "... oops ...".
  9. Doctors across the world are beginning to rely on artificial intelligence algorithms to help accelerate diagnostics and treatment plans, with the goal of making more time to see more patients, with greater precision. We all can understand—at least conceptually—what it takes to be a doctor: years of medical school lectures attended, stacks of textbooks and journals read, countless hours of on-the-job residencies. But the way AI has learned the medical arts is less intuitive. To get more clarity on how algorithms learn these patterns, and what pitfalls might still lurk within the technology, Quartz partnered with Leon Chen, co-founder of medical AI startup MD.ai, and radiologist Luke Oakden-Rayner, to train two algorithms and understand how they compare with a medical professional as they learn. One detects the presence of tumorous nodules, and the second gauges how likely a detected nodule is to be malignant. (A minimal sketch of this kind of binary classifier appears after this list.)
  10. (Image: the "e-dermis" applied to the thumb and forefinger of a prosthetic hand.)

Researchers have developed an "e-dermis", or electronic skin, that could be applied to a prosthetic hand to give the wearer a sense of touch. By using electronic sensors that mimic the nerve endings in the body, the skin can convey both the senses of touch and of pain.

The skin is made of a combination of fabric and rubber, into which the electronic sensors are embedded. The technology isn't invasive, but relays sensation through the wearer's skin using a method known as TENS, or transcutaneous electrical nerve stimulation – a process that needs hours of mapping of the subject's nerve endings.

It's thought the technology could make sense of so-called phantom limb sensations in amputees – the name given to the feeling that a missing limb remains present. The researchers used EEGs to confirm that phantom-limb sensations were felt during stimulation via the electronic skin over the course of tens of hours of testing.

According to the research paper, the subject mainly felt sensations of pressure along with some "electrical tingling" feelings, and reported feeling nothing more severe than an uncomfortable but tolerable pain. The researchers say the subject could report which fingers of the prosthesis were being stimulated "with perfect accuracy."

"For the first time, a prosthesis can provide a range of perceptions from fine touch to noxious to an amputee, making it more like a human hand," senior author of the research Nitish Thakor explains in a press release. The desire to restore pain may seem counterintuitive, but it could be used to warn the wearer of damage.

"This is interesting and new, because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points," adds biomedical student Luke Osborn.

"After many years, I felt my hand, as if a hollow shell got filled with life again," says the researchers' principal (and anonymous) volunteer.

At the moment the electronic skin is able to detect curvature and differentiate sharp objects, but in future it could be adapted for temperature sensitivity. As well as helping prosthesis users, the researchers think the technology could be used to improve space suits, or to aid robots.

We've reported on various touch-sensitive prostheses over the years, but this development shows just how far the technology has come: it needs no invasive surgery, differentiates touch from pain, and is potentially applicable to any prosthesis.

The skin is the work of a team of engineers at Johns Hopkins University and the Singapore Institute of Neurotechnology. The work has been published in the journal Science Robotics and can be read in full online.

Sources: Johns Hopkins University, Science Robotics
  11. It's a small AI startup with about 40 employees.
  12. As you mull it over, consider this jaw-dropping report from China: JD.com, a Chinese e-commerce behemoth, claimed it could receive, pack, ship, and deliver 200,000 orders a day across China. But get this, it employs just four workers at the fulfillment center. And those employees' jobs? To service the robots that fulfill the orders... Final food for thought: In January, the World Economic Forum and Boston Consulting Group said that by 2026, over 1 million Americans could lose their jobs to automation. + There's always another side to the story: "AI Doesn't Eliminate Jobs, It Creates Them."
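Referenced from item 3 above: a minimal sketch of what prompting the smaller, publicly released GPT-2 model looks like in practice. This is not OpenAI's own code; it assumes the third-party Hugging Face transformers library and its "gpt2" checkpoint name, loads the small released model, and samples a continuation of a prompt token by token.

    # Illustrative sketch only: assumes the Hugging Face "transformers" library,
    # not OpenAI's own training or release code.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small released model
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # The first line of Nineteen Eighty-Four, as The Guardian tried:
    prompt = "It was a bright cold day in April, and the clocks were striking thirteen."
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # GPT-2 was trained only to predict the next token; sampling repeatedly from
    # that next-token distribution is what produces long continuations of a prompt.
    output_ids = model.generate(
        input_ids,
        max_length=120,                        # total tokens, prompt included
        do_sample=True,                        # sample rather than always taking the top token
        top_k=40,                              # consider only the 40 most likely next tokens
        temperature=0.8,                       # <1 keeps sampling a little more conservative
        pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad-token warning
    )

    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The sampling settings (top_k, temperature, max_length) are illustrative defaults rather than values from the paper; they control how long and how adventurous the generated continuation is.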
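Referenced from item 9 above: a minimal sketch of the kind of binary image classifier described there, a small convolutional network that scores a scan crop for the presence of a nodule. It assumes PyTorch and single-channel 64x64 crops, and stands in random tensors for real labelled data; it illustrates the general approach, not the actual MD.ai models.

    # Illustrative sketch only: a tiny binary "nodule present?" classifier in PyTorch.
    # Real labelled scan crops are replaced by random tensors below.
    import torch
    import torch.nn as nn

    class NoduleClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 1),   # assumes 64x64 single-channel input crops
            )

        def forward(self, x):
            return self.head(self.features(x))   # raw logit; sigmoid gives P(nodule)

    model = NoduleClassifier()
    criterion = nn.BCEWithLogitsLoss()            # binary target: nodule present or not
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on a fake batch standing in for real labelled crops.
    images = torch.randn(8, 1, 64, 64)            # 8 grayscale 64x64 crops
    labels = torch.randint(0, 2, (8, 1)).float()  # 1 = nodule, 0 = no nodule
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"training loss: {loss.item():.3f}")

A second model of the same shape, trained on crops of detected nodules with benign/malignant labels, would correspond to the article's second algorithm, the one that gauges how likely a nodule is to be malignant.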
