All about New Scientific, Concept and Futuristic Technologies Thread

Discussion in 'Off Topic' started by Dr. AMK, Feb 2, 2018.

  1. Dr. AMK

    Dr. AMK The Strategist

    Reputations:
    1,717
    Messages:
    1,255
    Likes Received:
    2,695
    Trophy Points:
    181
    DARPA-funded prosthetic memory system successful in humans, study finds
    Coded electrical signal reinforces memories in patients, supporting pioneering research at USC and Wake Forest Baptist Medical Center
    April 3, 2018
    [​IMG]
    Hippocampal prosthesis restores memory functions by creating “MIMO” model-based electrical stimulation of the hippocampus — bypassing a damaged brain region (red X). (credit: USC)

    Scientists at Wake Forest Baptist Medical Center and the University of Southern California (USC) Viterbi School of Engineering have demonstrated a neural prosthetic system that can improve a memory by “writing” information “codes” (based on a patient’s specific memory patterns) into the hippocampus of human subjects via an electrode implanted in the hippocampus (a part of the brain involved in making new memories).

    In this pilot study, described in a paper published in Journal of Neural Engineering, epilepsy patients’ short-term memory performance showed a 35 to 37 percent improvement over baseline measurements, as shown in this video. The research, funded by the U.S. Defense Advanced Research Projects Agency (DARPA), offers evidence supporting pioneering research by USC scientist Theodore Berger, Ph.D. (a co-author of the paper), on an electronic system for restoring memory in rats (reported on KurzweilAI in 2011).

    “This is the first time scientists have been able to identify a patient’s own brain-cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better — an important first step in potentially restoring memory loss,” said the paper’s lead author Robert Hampson, Ph.D., professor of physiology/pharmacology and neurology at Wake Forest Baptist.

    The study focused on improving episodic memory (information that is new and useful for a short period of time, such as where you parked your car on any given day) — the most common type of memory loss in people with Alzheimer’s disease, stroke, and head injury.

    The researchers enrolled epilepsy patients at Wake Forest Baptist who were participating in a diagnostic brain-mapping procedure that used surgically implanted electrodes placed in various parts of the brain to pinpoint the origin of the patients’ seizures.

    Reinforcing memories

    [​IMG]
    (LEFT) In one test*, the researchers recorded (“Actual”) the neural patterns or “codes” between two of three main areas of the hippocampus, known as CA3 and CA1, while the eight study participants were performing a computerized memory task. The patients were shown a simple image, such as a color block, and after a brief delay where the screen was blanked, were then asked to identify the initial image out of four or five on the screen. The USC team, led by biomedical engineers Theodore Berger, Ph.D., and Dong Song, Ph.D., analyzed the recordings from the correct responses and synthesized a code (RIGHT) for correct memory performance, based on a multi-input multi-output (MIMO) nonlinear mathematical model. The Wake Forest Baptist team played back that code to the patients (“Predicted” signal) while the patients performed the image-recall task. In this test, the patients’ episodic memory performance then showed a 37 percent improvement over baseline. (credit: USC)

    “We showed that we could tap into a patient’s own memory content, reinforce it, and feed it back to the patient,” Hampson said. “Even when a person’s memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient’s brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it.

    “To date we’ve been trying to determine whether we can improve the memory skill people still have. In the future, we hope to be able to help people hold onto specific memories, such as where they live or what their grandkids look like, when their overall memory begins to fail.”

    The current study is built on more than 20 years of preclinical research on memory codes led by Sam Deadwyler, Ph.D., professor of physiology and pharmacology at Wake Forest Baptist, along with Hampson, Berger, and Song. The preclinical work applied the same type of stimulation to restore and facilitate memory in animal models using the MIMO system, which was developed at USC.

    * In a second test, participants were shown a highly distinctive photographic image, followed by a short delay, and asked to identify the first photo out of four or five others on the screen. The memory trials were repeated with different images while the neural patterns were recorded during the testing process to identify and deliver correct-answer codes. After another longer delay, Hampson’s team showed the participants sets of three pictures at a time, with both original and new photos included in the sets, and asked the patients to identify the original photos, which had been seen up to 75 minutes earlier. When stimulated with the correct-answer codes, study participants showed a 35 percent improvement in memory over baseline.

    Abstract of Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall
    Objective. We demonstrate here the first successful implementation in humans of a proof-of-concept system for restoring and improving memory function via facilitation of memory encoding using the patient’s own hippocampal spatiotemporal neural codes for memory. Memory in humans is subject to disruption by drugs, disease and brain injury, yet previous attempts to restore or rescue memory function in humans typically involved only nonspecific modulation of brain areas and neural systems related to memory retrieval. Approach. We have constructed a model of processes by which the hippocampus encodes memory items via spatiotemporal firing of neural ensembles that underlie the successful encoding of short-term memory. A nonlinear multi-input, multi-output (MIMO) model of hippocampal CA3 and CA1 neural firing is computed that predicts activation patterns of CA1 neurons during the encoding (sample) phase of a delayed match-to-sample (DMS) human short-term memory task. Main results. MIMO model-derived electrical stimulation delivered to the same CA1 locations during the sample phase of DMS trials facilitated short-term/working memory by 37% during the task. Longer term memory retention was also tested in the same human subjects with a delayed recognition (DR) task that utilized images from the DMS task, along with images that were not from the task. Across the subjects, the stimulated trials exhibited significant improvement (35%) in both short-term and long-term retention of visual information. Significance. These results demonstrate the facilitation of memory encoding, which is an important feature for the construction of an implantable neural prosthetic to improve human memory.
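    For readers curious what a "nonlinear MIMO model" looks like in code, here is a deliberately tiny sketch: it fits a quadratic multi-input, multi-output map from simulated "CA3" activity to "CA1" activity by least squares. Everything here — the rates, dimensions, and quadratic form — is invented for illustration; the published model operates on recorded spike trains with Volterra-kernel nonlinearities, not this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the MIMO model: 4 "CA3" inputs, 3 "CA1" outputs,
# 500 time bins. The hidden mapping has linear and quadratic terms,
# echoing the model's nonlinearity.
ca3 = rng.poisson(5, size=(500, 4)).astype(float)
lin = rng.normal(size=(4, 3))
quad = rng.normal(size=(4, 3))
ca1 = ca3 @ lin + (ca3 ** 2) @ quad + 0.1 * rng.normal(size=(500, 3))

def features(x):
    """Second-order feature expansion: constant, linear, and pairwise terms."""
    pairs = np.einsum("ti,tj->tij", x, x).reshape(len(x), -1)
    return np.hstack([np.ones((len(x), 1)), x, pairs])

# "Training": fit the multi-input multi-output map by least squares.
W, *_ = np.linalg.lstsq(features(ca3), ca1, rcond=None)

# The fitted model's output plays the role of the "Predicted" CA1 code.
predicted = features(ca3) @ W
r = np.corrcoef(predicted.ravel(), ca1.ravel())[0, 1]
print(f"fit correlation: {r:.3f}")
```

    In the study's terms, `predicted` plays the role of the "Predicted" CA1 signal that was played back to patients as patterned stimulation.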
     
  2. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    4,882
    Messages:
    17,039
    Likes Received:
    20,936
    Trophy Points:
    931
    Batteries Come in All Shapes & Sizes, But Each One Sucks in Its Own Way (Part 1 of 3)
    Published on Apr 5, 2018
    Batteries have been around for 2000 years, but still have a lot of room for improvement. Why aren’t modern batteries better?


    We’re Running Out of Lithium to Make Batteries, And We Ain’t Li-Ion (Part 2 of 3)
    Published on Apr 12, 2018
    We sat down with a battery expert and asked him everything you want to know.


    Part 3 of 3 - Coming next Thursday!
     
    Dr. AMK likes this.
  3. hmscott

    hmscott Notebook Nobel Laureate

    Why Bitcoin is Not Cash - Computerphile

    Published on Apr 10, 2018
    "Bitcoin shouldn't be regulated because it works like cash." Professor Ross Anderson of University of Cambridge on why Bitcoin isn't cash.
    Tracing Stolen Bitcoin: https://youtu.be/UlLN0QERWBs
    Atomic Processing: https://youtu.be/Kmt14S7yR7w

    https://www.facebook.com/computerphile
    https://twitter.com/computer_phile

    This video was filmed and edited by Sean Riley.

    Computer Science at the University of Nottingham: https://bit.ly/nottscomputer

    Computerphile is a sister project to Brady Haran's Numberphile. More at http://www.bradyharan.com
    Stolen Bitcoin Tracing - Computerphile

    Published on Mar 23, 2018
    When bitcoin is spent, remainders are re-encoded & combined - how do you separate out any ill-gotten gains from the legitimate hard-earned lucre? Outlining his team's solution: Professor Ross Anderson of the Computer Laboratory, University of Cambridge.
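    The team's tracing scheme builds on the first-in, first-out (FIFO) rule from Clayton's Case: the first satoshis into an address are deemed the first satoshis out. A minimal sketch of FIFO tainting — the transaction amounts are made up, and this is not the team's code:

```python
from collections import deque

def fifo_taint(inputs, outputs):
    """Apportion taint across a transaction's outputs by first-in, first-out.

    inputs:  list of (amount, tainted) pairs in the order they entered
    outputs: list of output amounts, paid out in order
    Returns the tainted amount contained in each output.
    """
    pool = deque(inputs)  # coins queue up and are spent strictly in order
    tainted_out = []
    for out_amount in outputs:
        tainted, need = 0.0, out_amount
        while need > 1e-9 and pool:
            amt, is_tainted = pool[0]
            take = min(amt, need)
            if is_tainted:
                tainted += take
            if take < amt:
                pool[0] = (amt - take, is_tainted)  # partially spent input
            else:
                pool.popleft()  # input fully consumed
            need -= take
        tainted_out.append(tainted)
    return tainted_out

# One stolen coin (1.0, tainted) mixed with two clean 0.5 coins,
# then split into outputs of 0.6, 0.9 and 0.5:
res = fifo_taint([(1.0, True), (0.5, False), (0.5, False)], [0.6, 0.9, 0.5])
print(res)  # the first 1.0 paid out carries all the taint
```

    Under FIFO there is no averaging: the stolen coin's full value lands on whoever received the first 1.0 BTC out, which is what makes the rule legally attractive.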
    Why Bitcoin isn't cash: Coming Soon
    Elliptic Curve Backdoor: https://youtu.be/nybVFJVXbww

     
    Dr. AMK likes this.
  4. Dr. AMK

    Dr. AMK The Strategist

    Hyperloop announced the construction of a kilometer-long test track near its R&D center in France.
     
    hmscott likes this.
  5. hmscott

    hmscott Notebook Nobel Laureate

    New Graphene Discovery Could Finally Punch the Gas Pedal, Drive Faster CPUs
    By Joel Hruska on April 17, 2018 at 2:30 pm
    https://www.extremetech.com/computi...finally-punch-the-gas-pedal-drive-faster-cpus

    "Few substances have excited the computer industry as much as graphene. Few substances have proven as maddening and difficult to work with as graphene. The collision of these two facts is why, 14 years after Andre Geim and Konstantin Novoselov characterized and isolated the substance, we’re still waiting to see it used in semiconductor manufacturing. A recent breakthrough, however, could finally change that.

    There are, broadly speaking, two major problems with graphene. The first problem is the difficulty in producing it at scale. The second is its electrical conductivity. The latter might seem like an odd problem, given that graphene’s phenomenal electrical properties are the reason semiconductor manufacturers are interested in it in the first place. But graphene’s unique capabilities also make it difficult to stop the material from conducting electricity. Silicon has a band gap — an energy range where it doesn’t conduct electricity. Graphene, in its pure form, does not. While a handful of methods of producing a band gap in graphene have been found, none of them have been suitable for mass production. That may finally change, thanks to a team from the Catalan Institute of Nanoscience and Nanotechnology (ICN2), who have found a way to create a graphene bandgap that’s identical to silicon’s.

    “What we show in our work is that it is possible to fabricate a graphene ‘like’ material, but with a gap that is very close to the one of silicon,” Aitor Mugarza, a research professor and group leader at ICN2, told IEEE Spectrum. “In addition, by simply modifying the width of the graphene strips between the pores (the number of carbon atoms), this band gap can be controlled. The fabrication method is relatively simple and can be extended to wafer-scale growth.”
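    As a back-of-the-envelope illustration of Mugarza's width/gap trade-off: for very narrow graphene ribbons the band gap scales roughly as the inverse of the ribbon width. The proportionality constant below is an assumed round number on the order of 1 eV·nm, not a figure from the ICN2 paper:

```python
# Rule-of-thumb scaling for narrow graphene ribbons: band gap roughly
# inversely proportional to ribbon width. ALPHA_EV_NM is an illustrative
# constant (order 1 eV*nm), not a value fitted from the ICN2 work.
ALPHA_EV_NM = 1.0

def ribbon_gap_ev(width_nm):
    """Approximate band gap (eV) for a ribbon of the given width (nm)."""
    return ALPHA_EV_NM / width_nm

# Invert the rule to see what width a silicon-like ~1.1 eV gap implies:
target_gap_ev = 1.1
width_nm = ALPHA_EV_NM / target_gap_ev
print(f"~{width_nm:.2f} nm wide strips for a {target_gap_ev} eV gap")
```

    With this assumed constant, a silicon-like gap lands at strip widths just under a nanometer — the same "order of 1nm" lateral scale the article quotes for the ICN2 structures.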

    One key component of the work is that the advances were driven by bottom-up construction rather than top-down methods. It’s difficult to hit the nanometer scales of modern semiconductor manufacturing with the top-down method, but building the structure from the bottom is much easier to scale to mass manufacturing, according to Mugarza. The research team claims their technique scales to the atomic scale, with lateral dimensions “on the order of 1nm.” A video of the construction process is embedded below.

    Nanoporous graphene membranes for smart filters and sensors


    So, with this discovery, are we all set for graphene production? Not exactly. There’s still a long road between the present day and any chance of seeing graphene transistors. There are major questions around substrates, contacts, and mass manufacturing. If Intel, Samsung, TSMC, and GlobalFoundries can’t build graphene in sufficient quantities, we’ll never see it adopted for anything but the most esoteric projects.

    But at the same time, this genuinely does seem to be a major step forward for mass graphene production and integration into logic. Without the ability to create an effective band gap, we were never going to see graphene in transistors at all. And of all the methods of creating a band gap we’ve discovered, the bottom-up method seems to have the best chance of working at scale and creating the desired characteristics. That qualifies as a genuine breakthrough in our book, even if it’s not large enough to clear the runway entirely and prompt immediate commercialization.
    Now read: What is graphene? "
     
    Dr. AMK likes this.
  6. Dr. AMK

    Dr. AMK The Strategist

    Intel cancels its smart glasses due to lack of investment
    The Vaunt would have been the stealthiest smart glasses to date.
    [​IMG]
    When Intel showed off its Vaunt smart glasses (aka "Superlight" internally) back in February, we had high hopes for a new wave of wearable tech that wouldn't turn us into Borgs. Alas, according to The Information's source, word has it that the chip maker is closing the group responsible for wearable devices, which, sadly, included the Vaunt. This was later confirmed by Intel in a statement, which hinted at a lack of investment due to "market dynamics." Indeed, Bloomberg had earlier reported that Intel was looking to sell a majority stake in this division, which had about 200 employees and was valued at $350 million.
    To avoid the awkwardness that doomed Google Glass, Intel took the subtle approach by cramming a retinal laser projector -- along with all the other electronic bits, somehow -- into the Vaunt's ordinary-looking spectacle frame; plus there was no camera on it. The low-power projector would beam a red, monochrome 400 x 150 pixel image into the lower right corner of one's visual field, thus eliminating the need for a protruding display medium.
    The Verge added that the projection was designed to be non-intrusive, such that it was only visible if you glanced in that direction. Of course, this would limit the amount of detail that could be shown to the user, but it could still deliver basic notifications, text messages and navigation info.
    It's unclear how Intel's withdrawal from the smart glasses market will affect the industry as a whole, but it does mean we're still some time away from seeing something just as impressively stealthy. Meanwhile, other tech giants like Amazon and Apple are still working hard on their own take on smart glasses, so here's hoping these will be worth the wait.
    The following is the full Intel statement sent to Engadget:
    "Intel is continuously working on new technologies and experiences. Not all of these develop into a product we choose to take to market. The Superlight project is a great example where Intel developed truly differentiated, consumer augmented reality glasses. We are going to take a disciplined approach as we keep inventing and exploring new technologies, which will sometimes require tough choices when market dynamics don't support further investment."
     
  7. hmscott

    hmscott Notebook Nobel Laureate

    That's too bad. That photo of the glasses on the workbench - the out-of-focus sunglasses - are like what I wear; I could have worn them without standing out over a change in glasses.

    Too bad about no camera too; that's really needed for most useful functionality, like looking at a bar code or QR code, image recognition of a product I want to look up, sharing video of my broken-down car with the repair shop, and all the niceties of computing and display on the glasses that go with it.

    I'm actually hoping for wireless Contact Lenses, as that would be what I could switch to indoors without any changes to my appearance.

    Hopefully they would glow Blue, Red, or Greenish Orange to "alert" people I have "eyes" on them. :)
    [​IMG] Not Creepy at all...
     
    Last edited: Apr 19, 2018
    Dr. AMK likes this.
  8. hmscott

    hmscott Notebook Nobel Laureate

    FDA News Release
    FDA clears first contact lens with light-adaptive technology
    For Immediate Release

    April 10, 2018
    https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm604263.htm

    "The U.S. Food and Drug Administration today cleared the first contact lens to incorporate an additive that automatically darkens the lens when exposed to bright light.

    The Acuvue Oasys Contact Lenses with Transitions Light Intelligent Technology are soft contact lenses indicated for daily use to correct the vision of people with non-diseased eyes who are nearsighted (myopia) or farsighted (hyperopia). They can be used by people with certain degrees of astigmatism, an abnormal curvature of the eye.

    The National Eye Institute at the National Institutes of Health estimates that 42 percent of Americans aged 12 to 54 have myopia and 5 to 10 percent of all Americans have hyperopia. The Centers for Disease Control and Prevention estimates that as of 2014, more than 40 million Americans were contact lens wearers.

    “This contact lens is the first of its kind to incorporate the same technology that is used in eyeglasses that automatically darken in the sun,” said Malvina Eydelman, director of the Division of Ophthalmic and Ear, Nose and Throat Devices at the FDA's Center for Devices and Radiological Health.

    The contact lenses contain a photochromic additive that adapts the amount of visible light filtered to the eye based on the amount of UV light to which they are exposed. This results in slightly darkened lenses in bright sunlight that automatically return to a regular tint when exposed to normal or dark lighting conditions.

    For today’s clearance, the FDA reviewed scientific evidence including a clinical study of 24 patients that evaluated daytime and nighttime driving performance while wearing the contact lenses. The results of the study demonstrated there was no evidence of concerns with either driving performance or vision while wearing the lenses.

    Patients with the following conditions should not use these contact lenses: inflammation or infection in or around the eye or eyelids; any eye disease, injury or abnormality that affects the cornea, conjunctiva (the mucous membrane that covers the front of the eye and lines the inside of the eyelids) or eyelids; any previously diagnosed condition that makes contact lens wear uncomfortable; severe dry eye; reduced corneal sensitivity; any systemic disease that may affect the eye or be made worse by wearing contact lenses; allergic reactions on the surface of the eye or surrounding tissues that may be induced or made worse by wearing contact lenses or use of contact lens solutions; any active eye infection or red or irritated eyes.

    These contacts are intended for daily wear for up to 14 days. Patients should not sleep in these contact lenses, expose them to water or wear them longer than directed by an eye care professional. These contacts should not be used as substitutes for UV protective eyewear.

    The Acuvue Oasys Contact Lenses with Transitions Light Intelligent Technology were reviewed through the premarket notification 510(k) pathway. A 510(k) is a premarket submission made by device manufacturers to the FDA to demonstrate that the new device is substantially equivalent to a legally marketed predicate device.

    The FDA granted clearance of the Acuvue Oasys Contact Lenses with Transitions Light Intelligent Technology to Johnson & Johnson Vision Care, Inc."

    They don't mention if they have them in Lizard Pupil Greenish Orange yet... :)
     
    Dr. AMK likes this.
  9. Dr. AMK

    Dr. AMK The Strategist

    How deep learning is about to transform biomedical science
    “In silico labeling” aims to decode the terabytes of data per day generated in bio research labs
    April 18, 2018
    [​IMG]
    Human induced pluripotent stem cell neurons imaged in phase contrast (gray pixels, left) — currently processed manually with fluorescent labels (color pixels) to make them visible. That’s about to radically change. (credit: Google)

    Researchers at Google, Harvard University, and Gladstone Institutes have developed and tested new deep-learning algorithms that can identify details in terabytes of bioimages, replacing slow, less-accurate manual labeling methods.

    Deep learning is a type of machine learning that can analyze data, recognize patterns, and make predictions. A new deep-learning approach to biological images, which the researchers call “in silico labeling” (ISL), can automatically find and predict features in images of “unlabeled” cells (cells that have not been manually identified by using fluorescent chemicals).

    The new deep-learning network can identify whether a cell is alive or dead, and get the answer right 98 percent of the time (humans can typically only identify a dead cell with 80 percent accuracy) — without requiring invasive fluorescent chemicals, which make it difficult to track tissues over time. The deep-learning network can also predict detailed features such as nuclei and cell type (such as neural or breast cancer tissue).

    The deep-learning algorithms are expected to make it possible to handle the enormous 3–5 terabytes of data per day generated by Gladstone Institutes’ fully automated robotic microscope, which can track individual cells for up to several months.

    The research was published in the April 12, 2018 issue of the journal Cell.

    How to train a deep-learning neural network to predict the identity of cell features in microscope images

    [​IMG]

    Using fluorescent labels with unlabeled images to train a deep neural network to bring out image detail. (Left) An unlabeled phase-contrast microscope transmitted-light image of rat cortex — the center image from the z-stack (vertical stack) of unlabeled images. (Right three images) Labeled images created with three different fluorescent labels, revealing invisible details of cell nuclei (blue), dendrites (green), and axons (red). The numbered outsets at the bottom show magnified views of marked subregions of images. (credit: Finkbeiner Lab)

    To explore the new deep-learning approach, Steven Finkbeiner, MD, PhD, the director of the Center for Systems and Therapeutics at Gladstone Institutes in San Francisco, teamed up with computer scientists at Google.

    “We trained the [deep learning] neural network by showing it two sets of matching images of the same cells: one unlabeled [such as the black and white "phase contrast" microscope image shown in the illustration] and one with fluorescent labels [such as the three colored images shown above],” explained Eric Christiansen, a software engineer at Google Accelerated Science and the study’s first author. “We repeated this process millions of times. Then, when we presented the network with an unlabeled image it had never seen, it could accurately predict where the fluorescent labels belong.” (Fluorescent labels are created by adding chemicals to tissue samples to help visualize details.)
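    The training procedure Christiansen describes — show the model paired unlabeled/labeled images until it can predict the labels on images it has never seen — can be caricatured with a linear "network" in a few lines. The 3x3 patches, hidden labeling kernel, and single least-squares solve below are invented stand-ins for the real TensorFlow model and its millions of training steps:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for ISL training: paired (unlabeled, labeled) images of the
# same field. Here the "fluorescent" image is a hidden linear function of
# each 3x3 neighborhood of the "transmitted-light" image.
def patches(img, k=3):
    """All k x k patches of img, one per row."""
    h, w = img.shape
    cols = [img[i:h - k + 1 + i, j:w - k + 1 + j]
            for i in range(k) for j in range(k)]
    return np.stack(cols, axis=-1).reshape(-1, k * k)

unlabeled = rng.random((32, 32))       # "transmitted-light" training image
kernel = rng.normal(size=9)            # hidden "labeling" process
labeled = patches(unlabeled) @ kernel  # matching "fluorescent" target pixels

# "Training": regress label pixels on input patches.
w, *_ = np.linalg.lstsq(patches(unlabeled), labeled, rcond=None)

# Inference on an image the model has never seen:
new_img = rng.random((32, 32))
pred = patches(new_img) @ w
truth = patches(new_img) @ kernel
print(f"held-out correlation: {np.corrcoef(pred, truth)[0, 1]:.3f}")
```

    Because the toy labeling process really is linear in the patch, the regression recovers it exactly; the actual study needs a deep network precisely because the mapping from transmitted light to fluorescence is nothing like linear.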

    The study used three cell types: human motor neurons derived from induced pluripotent stem cells, rat cortical cultures, and human breast cancer cells. For instance, the deep-learning neural network can identify a physical neuron within a mix of cells in a dish. It can go one step further and predict whether an extension of that neuron is an axon or dendrite (two different but similar-looking elements of the neural cell).

    For this study, Google used TensorFlow, an open-source machine learning framework for deep learning originally developed by Google AI engineers. The code for this study, which is open-source on Github, is the result of a collaboration between Google Accelerated Science and two external labs: the Lee Rubin lab at Harvard and the Steven Finkbeiner lab at Gladstone.

    [​IMG]
    Animation showing the same cells in transmitted light (black and white) and fluorescence (colored) imaging, along with predicted fluorescence labels from the in silico labeling model. Outset 2 shows the model predicts the correct labels despite the artifact in the transmitted-light input image. Outset 3 shows the model infers these processes are axons, possibly because of their distance from the nearest cells. Outset 4 shows the model sees the hard-to-see cell at the top, and correctly identifies the object at the left as DNA-free cell debris. (credit: Google)

    Transforming biomedical research

    “This is going to be transformative,” said Finkbeiner, who is also a professor of neurology and physiology at UC San Francisco. “Deep learning is going to fundamentally change the way we conduct biomedical science in the future, not only by accelerating discovery, but also by helping find treatments to address major unmet medical needs.”

    In his laboratory, Finkbeiner is trying to find new ways to diagnose and treat neurodegenerative disorders, such as Alzheimer’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis (ALS). “We still don’t understand the exact cause of the disease for 90 percent of these patients,” said Finkbeiner. “What’s more, we don’t even know if all patients have the same cause, or if we could classify the diseases into different types. Deep learning tools could help us find answers to these questions, which have huge implications on everything from how we study the disease to the way we conduct clinical trials.”

    Without knowing the classifications of a disease, a drug could be tested on the wrong group of patients and seem ineffective, when it could actually work for different patients. With induced pluripotent stem cell technology, scientists could match patients’ own cells with their clinical information, and the deep network could find relationships between the two datasets to predict connections. This could help identify a subgroup of patients with similar cell features and match them to the appropriate therapy, Finkbeiner suggests.

    The research was funded by Google, the National Institute of Neurological Disorders and Stroke of the National Institutes of Health, the Taube/Koret Center for Neurodegenerative Disease Research at Gladstone, the ALS Association’s Neuro Collaborative, and The Michael J. Fox Foundation for Parkinson’s Research.

    Abstract of In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images
    Microscopy is a central method in life sciences. Many popular methods, such as antibody labeling, are used to add physical fluorescent labels to specific cellular constituents. However, these approaches have significant drawbacks, including inconsistency; limitations in the number of simultaneous labels because of spectral overlap; and necessary perturbations of the experiment, such as fixing the cells, to generate the measurement. Here, we show that a computational machine-learning approach, which we call “in silico labeling” (ISL), reliably predicts some fluorescent labels from transmitted-light images of unlabeled fixed or live biological samples. ISL predicts a range of labels, such as those for nuclei, cell type (e.g., neural), and cell state (e.g., cell death). Because prediction happens in silico, the method is consistent, is not limited by spectral overlap, and does not disturb the experiment. ISL generates biological measurements that would otherwise be problematic or impossible to acquire.

     
  10. hmscott

    hmscott Notebook Nobel Laureate

    We Tried To Steal Food From A Delivery Robot
    Published on Apr 3, 2017
    Robots are the future of food delivery and the temptation to steal from them is real.


    Humans, always mucking things up; you thought of a good thing to help mankind, and they put just as much thought into trying to turn it into garbage. :D
     
    Dr. AMK likes this.