• Content count

  • Joined

  • Last visited

Community Reputation

18 Good

1 Follower

About SciTechPress

  • Rank
    Advanced Member

Recent Profile Visitors

823 profile views
  1. peashooter85: Atomic Annie — The M65 Atomic Cannon. Designed in 1949 by the American engineer Robert Schwarz, the M65 “Atomic Annie” was inspired by German railway guns used during World War II. The M65, however, was designed to deliver a nuclear payload to its target. The gun and carriage together weighed around 85 tons, were manned by a crew of 5-7, and were transported by two specially designed towing tractors. At 280mm in caliber and capable of firing a projectile over 20 miles, the gun was formidable as a conventional weapon, but the Atomic Annie was no conventional weapon. In 1953 it was tested for the first time at the Nevada Test Site, where it fired a 15-kiloton nuclear warhead, creating a blast comparable in size to the bombs dropped on Hiroshima and Nagasaki. After the successful test, 20 M65 cannons were produced for the US Army and deployed in Europe and Korea. They were kept in nearly constant motion so the Soviets never knew where they were and could not target them. While an interesting weapon, the Atomic Annie suffered from limited range, especially after the development of ballistic missiles, which could strike a target from thousands of miles away. The last M65 Atomic Cannon was retired in 1963. Today only 8 survive, displayed in museums across the country. via
  2. neurosciencestuff: Contrary to popular belief, language is not limited to speech. In a recent study published in the journal PNAS, Northeastern University Prof. Iris Berent reveals that people also apply the rules of their spoken language to sign language. Language is not simply about hearing sounds or moving our mouths. When our brain is “doing language,” it projects abstract structure. The modality (speech or sign) is secondary. “There is a misconception in the general public that sign language is not really a language,” said Berent. “Part of our mandate, through the support of the NSF, is to reveal the complex structure of sign language, and in so doing, disabuse the public of this notion.” THE EXPERIMENT To come to this conclusion, Berent’s lab studied words (and signs) that shared the same general structure. She found that people reacted to this structure in the same way, irrespective of whether they were presented with speech or signs. In the study, Berent studied words and signs with doubling (e.g., slaflaf)—ones that show full or partial repetition. She found that responses to these forms shift, depending on their linguistic context. When a word is presented by itself (or as a name for just one object), people avoid doubling. For example, they rate slaflaf (with doubling) worse than slafmak (with no doubling). But when doubling signaled a systematic change in meaning (e.g., slaf=singular, slaflaf=plural), participants now preferred it. Next, Berent asked what happens when people see doubling in signs (signs with two identical syllables). The subjects were English speakers who had no knowledge of a sign language. To Berent’s surprise, these subjects responded to signs in the same way they responded to the words. They disliked doubling for singular objects, but they systematically preferred it if (and only if) doubling signaled plurality. 
Hebrew speakers showed this preference when doubling signaled a diminutive, in line with the structure of their language. “It’s not about the stimulus, it’s really about the mind, and specifically about the language system,” said Berent. “These results suggest that our knowledge of language is abstract and amodal. Human brains can grasp the structure of language regardless of whether it is presented in speech or in sign.” SIGN LANGUAGE IS LANGUAGE Currently there is debate about what role sign language has played in language evolution, and whether the structure of sign language shares similarities with spoken language. Berent’s lab shows that our brain detects some deep similarities between speech and sign language. This allows English speakers, for example, to extend their knowledge of language to sign language. “Sign language has a structure, and even if you examine it at the phonological level, where you would expect it to be completely different from spoken language, you can still find similarities. What’s even more remarkable is that our brain can extract some of this structure even when we have no knowledge of sign language. We can apply some of the rules of our spoken language phonology to signs,” said Berent. Berent says these findings show that our brains are built to deal with very different types of linguistic inputs. The results from this paper confirm what some scientists have long thought but what hasn’t truly been grasped by the general public: language is language, no matter what format it takes. “This is a significant finding for the deaf community, because sign language is their legacy. It defines their identity, and we should all recognize its value. It’s also significant to our human identity, generally, because language is what defines us as a species.” To help further support these findings, Berent and her lab intend to examine how these rules apply to other languages. The present study focused on English and Hebrew. via
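The doubling contrast in the study above (slaf vs. slaflaf, slafmak with no doubling) can be sketched as a simple string check. This is purely illustrative: the function name and the orthographic treatment are assumptions of this sketch, and the actual study used spoken and signed stimuli, not text.

```python
# Toy check for the "doubling" pattern discussed above (e.g. slaf -> slaflaf):
# a form counts as doubled here if it ends in an immediately repeated chunk.
# Purely illustrative; the study tested spoken and signed stimuli, not strings.

def has_final_doubling(form: str) -> bool:
    """Return True if the form ends with some substring repeated twice in a
    row, e.g. 'slaflaf' ends in 'laf' + 'laf'."""
    for n in range(1, len(form) // 2 + 1):
        if form[-n:] == form[-2 * n:-n]:
            return True
    return False

print(has_final_doubling("slaflaf"))  # True: ends in repeated 'laf'
print(has_final_doubling("slafmak"))  # False: no repeated final chunk
```

The same check also flags full reduplication such as slafslaf, matching the study’s description of forms with “full or partial repetition.”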
  3. did-you-kno: In 1897, Indiana almost passed a bill to change the value of pi. An amateur mathematician decided he had proof that pi was not 3.14 but actually 3.2, and he convinced the state legislature to take up the bill. It passed the House unanimously and made it through a Senate committee, and it likely would’ve been approved if a professor from Purdue hadn’t been in town. After hearing the news, he went to the statehouse, watched the debate, decided to intervene, and eventually convinced the Senate that the theory was nonsense. Source, Source 2 via
  4. World’s first fully-manned hoverbike tested in Moscow via
  5. Using Virtual Reality to Train Aircraft Mechanics via
  6. In recent years, astronauts have reported their vision changing as a result of long-duration spaceflight. Pre- and post-flight studies of astronauts’ eyes showed flattening along the backside of the eyeball, and scientists hypothesized that the redistribution of body fluids that occurs in microgravity could be reshaping astronauts’ eyes by increasing the intracranial pressure in their skulls. A new study tested this hypothesis with the first-ever measurements of intracranial pressure during microgravity flights and during extended microgravity simulation (i.e., bedrest with one’s head tilted downward). The authors found that humans here on Earth experience substantial changes in intracranial pressure depending on posture: while upright, we experience much lower intracranial pressure than we do when we’re lying flat. In both microgravity flights and simulation, patients had intracranial pressures that were higher than earthbound upright values but lower than what is experienced when lying flat on Earth. Since we humans on Earth spend about two-thirds of our time upright and one-third lying down, our bodies are accustomed to regular variations in intracranial pressure. In space, astronauts don’t receive the regular unloading of intracranial pressure we get when we’re upright. So researchers now suggest that it is the lack of daily variation in intracranial pressure that is the culprit behind astronauts’ vision changes, not the absolute value of the pressure itself. (Image credit: NASA; N. Alperin et al.; research credit: J. Lawley et al.) via
  7. forbes: Fly from Los Angeles to Sydney in Under Four Hours on this Sleek Supersonic Jet via
  8. World’s Most Feared: US Air Force F-22 ready to make the Russians jealous. A demonstration of the US Air Force F-22, which is considered the worst nightmare for the S-400 missile air defense system. TYNDALL AIR FORCE BASE, Fla. - Four Tyndall F-22 Raptors and approximately 60 Airmen assigned to the 325th Fighter Wing arrived at Spangdahlem Air Base, Germany, today to train with allied air forces and U.S. services through mid-September. This first-ever F-22 training deployment to Europe is funded by the European Reassurance Initiative and provides support to bolster the security of our NATO allies and partners in Europe. “This inaugural Raptor training deployment is the perfect opportunity for these advanced aircraft to train alongside other U.S. Air Force aircraft, joint partners and NATO allies,” said General Frank Gorenc, U.S. Air Forces in Europe and Air Forces Africa commander. The training will prove that 5th-generation fighters can deploy successfully to European bases and other NATO installations, while also affording the chance for familiarization flight training within the European theater. It will also give the Raptors the chance to conduct combat air training alongside other aircraft such as U.S. F-15 Eagles and F-16 Fighting Falcons. “It’s important we test our infrastructure, aircraft capabilities and the talented Airmen and allies who will host 5th generation aircraft in Europe,” said Gorenc. “This deployment advances our airpower evolution and demonstrates our resolve and commitment to European safety and security.” Video Description Credit: Airman 1st Class Sergio Gamboa. Video Credit: Airman 1st Class Nicolas Myers. Video Thumbnail Credit: Rob Shenk from Great Falls, VA, USA. This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license. Licence link:… This Photo Modified by ArmedForcesUpdate via
  9. Seeking Immortality: Russian Scientists’ Hunt for Elixir of Life via
  10. neurosciencestuff: Cyber security and authentication have been under attack in recent months as, seemingly every other day, a new report of hackers gaining access to private or sensitive information comes to light. Just recently, credentials for more than 500 million accounts were stolen when Yahoo revealed its security had been compromised. Securing systems has gone beyond simply coming up with a clever password that could prevent nefarious computer experts from hacking into your Facebook account. The more sophisticated the system, or the more critical and private the information that system holds, the more advanced the identification system protecting it becomes. Fingerprint scans and iris identification are just two types of authentication methods, once thought of as science fiction, that are in wide use by the most secure systems. But fingerprints can be stolen and iris scans can be replicated. Nothing has proven foolproof against computer hackers. “The principal argument for behavioral, biometric authentication is that standard modes of authentication, like a password, authenticate you once before you access the service,” said Abdul Serwadda, a cybersecurity expert and assistant professor in the Department of Computer Science at Texas Tech University. “Now, once you’ve accessed the service, there is no other way for the system to still know it is you. The system is blind as to who is using the service. So the area of behavioral authentication looks at other user-identifying patterns that can keep the system aware of the person who is using it. Through such patterns, the system can keep track of some confidence metric about who might be using it and immediately prompt for reentry of the password whenever the confidence metric falls below a certain threshold.” One of those patterns that is growing in popularity within the research community is the use of brain waves obtained from an electroencephalogram, or EEG.
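The confidence-metric loop Serwadda describes can be sketched in a few lines. Everything here (the function names, the gain and decay values, the 0.5 threshold) is a hypothetical illustration of the idea, not code from any real authentication system.

```python
# Minimal sketch of continuous behavioral authentication: a confidence metric
# rises when observed behavior matches the enrolled user's profile and falls
# when it does not; dropping below a threshold triggers re-authentication.
# All parameter values are illustrative assumptions.

def update_confidence(confidence, sample_matches_profile, gain=0.05, decay=0.15):
    """Nudge confidence up on a matching behavioral sample, down otherwise,
    clamped to the range [0, 1]."""
    if sample_matches_profile:
        return min(1.0, confidence + gain)
    return max(0.0, confidence - decay)

def monitor_session(samples, threshold=0.5, confidence=1.0):
    """Process a stream of match/no-match observations. Return the index of
    the first sample at which the system would re-prompt for the password,
    or None if confidence never falls below the threshold."""
    for i, match in enumerate(samples):
        confidence = update_confidence(confidence, match)
        if confidence < threshold:
            return i  # confidence fell below threshold: demand the password
    return None
```

For example, `monitor_session([True, False, False, False, False])` returns 4: confidence decays through 0.85, 0.70, 0.55, then crosses below 0.5 on the fourth mismatch.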
Several research groups around the country have recently showcased systems that use EEG to authenticate users with very high accuracy. However, those brain waves can tell more about a person than just his or her identity. They could reveal medical, behavioral or emotional aspects of a person that, if brought to light, could be embarrassing or damaging to that person. And with EEG devices becoming much more affordable, accurate and portable, and with applications being designed that allow people to more readily read an EEG scan, the likelihood of that happening is dangerously high. “The EEG has become a commodity application. For $100 you can buy an EEG device that fits on your head just like a pair of headphones,” Serwadda said. “Now there are apps on the market, brain-sensing apps where you can buy the gadget, download the app on your phone and begin to interact with the app using your brain signals. That led us to think: now we have these brain signals that were traditionally accessed only by doctors being handled by regular people. Now anyone who can write an app can get access to users’ brain signals and try to manipulate them to discover what is going on.” That’s where Serwadda and graduate student Richard Matovu focused their attention: attempting to see if certain traits could be gleaned from a person’s brain waves. They presented their findings recently at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Biometrics. Brain waves and cybersecurity Serwadda said the technology is still evolving in terms of being able to use a person’s brain waves for authentication purposes. But it is a heavily researched field that has drawn the attention of several federal organizations.
The National Science Foundation (NSF) funds a three-year project on which Serwadda and others from Syracuse University and the University of Alabama-Birmingham are exploring how several behavioral modalities, including EEG brain patterns, could be leveraged to augment traditional user authentication mechanisms. “There are no installations yet, but a lot of research is going on to see if EEG patterns could be incorporated into standard behavioral authentication procedures,” Serwadda said. When a system uses EEG as the modality for user authentication, its variables will typically have been optimized to maximize authentication accuracy. A selection of such variables would include: the features used to build user templates; the signal frequency ranges from which features are extracted; and the regions of the brain on which the electrodes are placed, among other variables. Under this assumption of a finely tuned authentication system, Serwadda and his colleagues tackled the following questions: If a malicious entity were to somehow access templates from this authentication-optimized system, would he or she be able to exploit these templates to infer non-authentication-centric information about the users with high accuracy? And in the event that such inferences are possible, which attributes of template design could reduce or increase the threat? It turns out they could: EEG authentication systems do give away non-authentication-centric information. Using an authentication system from UC-Berkeley and a variant of another from a team at Binghamton University and the University at Buffalo, Serwadda and Matovu tested their hypothesis, using alcoholism as the sensitive private information that an adversary might want to infer from EEG authentication templates.
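The template-design variables listed above (feature choice, frequency ranges, electrode placement) can be made concrete with a toy band-power feature extractor. The band definitions, sampling rate, and synthetic signal below are assumptions for illustration only; real EEG pipelines use more careful spectral estimation (e.g., Welch’s method) and this is not the system from Serwadda’s study.

```python
# Toy "user template" built from EEG band-power features. The chosen bands,
# sampling rate, and synthetic two-channel signal are illustrative assumptions.
import numpy as np

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz ranges

def band_power_features(eeg, fs=FS, bands=BANDS):
    """eeg: array of shape (n_channels, n_samples). Returns one mean-power
    value per (band, channel) pair, concatenated into a template vector."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    feats = []
    for lo, hi in bands.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(power[:, mask].mean(axis=1))  # mean power per channel
    return np.concatenate(feats)

# Two channels of synthetic data: a 10 Hz (alpha-band) tone plus pure noise.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
eeg = np.vstack([np.sin(2 * np.pi * 10 * t), rng.normal(size=t.size)])
template = band_power_features(eeg)  # 3 bands x 2 channels = 6 features
```

Narrowing the bands or dropping electrodes shrinks this vector, which is exactly the kind of design choice the researchers found can trade a little authentication accuracy for much less leakable information.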
In a study involving 25 formally diagnosed alcoholics and 25 non-alcoholic subjects, the lowest error rate obtained when identifying alcoholics was 25 percent, meaning a classification accuracy of approximately 75 percent. When they tweaked the system and changed several variables, they found that the ability to detect alcoholism could be tremendously reduced at the cost of slightly reducing the performance of the EEG authentication system. Motivation for discovery Serwadda’s motivation for proving brain waves could be used to reveal potentially harmful personal information wasn’t to improve the methods for obtaining that information; it was to prevent it. To illustrate, he gives an analogy using fingerprint identification at an airport. Fingerprint scans read ridges and valleys on the finger to determine a person’s unique identity, and that’s it. In a hypothetical scenario where such systems could only function accurately if the user’s finger was pricked and some blood drawn, this would be problematic, because the blood could be used to infer things other than the user’s identity, for example whether the person suffers from a disease such as diabetes. Given the amount of extra information that EEG authentication systems are able to glean about the user, current EEG systems could be likened to that hypothetical fingerprint reader that pricks the user’s finger. Serwadda wants to drive research that develops EEG authentication systems that perform their intended purpose while revealing minimal information about traits other than the user’s identity. Currently, in the vast majority of studies on the EEG authentication problem, researchers primarily seek to outdo each other in terms of system error rates. They work with the central objective of designing a system with error rates much lower than the state of the art.
Whenever a research group develops or publishes an EEG authentication system that attains the lowest error rates, that system immediately becomes the reference point. A critical question that has not seen much attention up to this point is how certain design attributes of these systems, in other words the kinds of features used to formulate the user template, might relate to their potential to leak sensitive personal information. If, for example, the system with the lowest authentication error rates comes with the added baggage of leaking a significantly higher amount of private information, then such a system might, in practice, not be as useful as its low error rates suggest. Users will only accept a system, and get its full utility, if the potential privacy breaches associated with it are well understood and appropriate mitigations undertaken. But, Serwadda said, while the EEG is still being studied, the next wave of invention is already beginning. “In light of the privacy challenges seen with the EEG, it is noteworthy that the next wave of technology after the EEG is already being developed,” Serwadda said. “One of those technologies is functional near-infrared spectroscopy (fNIRS), which has a much higher signal-to-noise ratio than an EEG. It gives a more accurate picture of brain activity given its ability to focus on a particular region of the brain.” The good news, for now, is that fNIRS technology is still quite expensive; however, there is every likelihood that prices will drop over time, potentially opening civilian applications of the technology. Thanks to the efforts of researchers like Serwadda, minimizing the leakage of sensitive personal information through these technologies is beginning to gain attention in the research community.
“The basic idea behind this research is to motivate a direction of research which selects design parameters in such a way that we not only care about recognizing users very accurately but also care about minimizing the amount of sensitive personal information it can read,” Serwadda said. via
  11. classictrek: letterheady: In 1967, just a few months after Star Trek debuted on television, the show’s creator, Gene Roddenberry, used this very letterhead for all business correspondence. Gene Roddenberry, 1967 | Submitted by Dale Macy via
  12. todropscience: DOLLOCARIS, A PREDATOR OF THE JURASSIC, WAS ALL EYES. Vision has revolutionized the way animals explore their environment and interact with each other, and it rapidly became a major driving force in animal evolution. However, direct evidence of how ancient animals perceived their environment is extremely difficult to obtain, because internal eye structures are almost never fossilized. Paleontologists have now reconstructed with unprecedented resolution the three-dimensional structure of the huge compound eye of a 160-million-year-old thylacocephalan arthropod from the La Voulte exceptional fossil biota in southeast France. The study was published 19 January in Nature Communications. This arthropod, called Dollocaris ingens, had about 18,000 lenses on each eye, a record among extinct and extant arthropods that is surpassed only by modern dragonflies. Combined information about its eyes, internal organs and gut contents obtained by X-ray microtomography leads to the conclusion that this thylacocephalan arthropod was a visual hunter, probably adapted to illuminated environments, thus contradicting the hypothesis that La Voulte was a deep-water environment. - Eye structure of Dollocaris ingens. As a group, the Thylacocephala survived to the Upper Cretaceous. Beyond this, there remains much uncertainty concerning fundamental aspects of thylacocephalan anatomy, mode of life, and relationship to the Crustacea, with whom they have always been cautiously aligned. Reference (open access): Vannier et al. 2015. Exceptional preservation of eye structure in arthropod visual predators from the Middle Jurassic. Nature Communications via