All about New Scientific, Concept and Futuristic Technologies Thread

Discussion in 'Off Topic' started by Dr. AMK, Feb 2, 2018.

  1. Dr. AMK

  2. Dr. AMK

    Cosmic Bell and its integrated program are an innovative experiment in public engagement with complex science topics. The exhibition explores the Cosmic Bell Experiment, an international research project led by MIT physicists David Kaiser, Alan Guth, and Andrew Friedman. The experiment tackles fundamental concepts of physics as it attempts to close the last loophole in our understanding of quantum mechanics. The pieces in the exhibition were developed in a workshop at the MIT Museum’s Compton Studio, where MIT students collaborated with the leading researchers on how to make quantum experiments and observations real and visceral. The accompanying play, Both/And, engages audiences with the underlying principles of entanglement through the medium of theater.
     
  3. Dr. AMK

    Building Large-scale Production Systems for Latency-sensitive Applications
    https://www.microsoft.com/en-us/res...n-systems-for-latency-sensitive-applications/
    February 6, 2018
    Shadi Noghabi
    University of Illinois at Urbana–Champaign


    Overview

    In today’s era of increased engagement with technology, the myriad interactive and latency-sensitive applications around us necessitate handling large-scale data quickly and efficiently. This talk focuses on designing and developing production-quality systems with particular attention to improving end-to-end latency and building massive-scale solutions. At large scale, providing low latency becomes increasingly challenging, with many issues around distribution of data and computation, providing load balance, handling failures, and continuous scaling. We explore these issues on a wide range of systems, from a large-scale geo-distributed blob storage system that is running in production serving 450 million users (Ambry), to a stateful stream processing system handling 100s of TBs for a single job (Samza), and a real-time edge computing framework transparently running jobs in an edge-cloud environment (Fluid-Edge).
     
  4. Dr. AMK

  5. Dr. AMK

    Do our brains use the same kind of deep-learning algorithms used in AI?
    Bridging the gap between neuroscience and AI
    http://www.kurzweilai.net/do-our-brains-use-the-same-kind-of-deep-learning-algorithms-used-in-ai
    February 23, 2018
    This is an illustration of a multi-compartment neural network model for deep learning. Left: Reconstruction of pyramidal neurons from mouse primary visual cortex, the most prevalent cell type in the cortex. The tree-like form separates the “roots” at the bottom of these cortical neurons, which sit just where they need to be to receive signals about sensory input, from the “branches” at the top, which are well placed to receive feedback error signals. Right: Illustration of simplified pyramidal neuron models. (credit: CIFAR)

    Deep-learning researchers have found that certain neurons in the brain have shape and electrical properties that appear to be well-suited for “deep learning” — the kind of machine-intelligence used in beating humans at Go and Chess.

    Canadian Institute For Advanced Research (CIFAR) Fellow Blake Richards and his colleagues — Jordan Guerguiev at the University of Toronto, Scarborough, and Timothy Lillicrap at Google DeepMind — developed an algorithm that simulates how a deep-learning network could work in our brains. It represents a biologically realistic way by which real brains could do deep learning.*

    The finding is detailed in a study published December 5th in the open-access journal eLife. (The paper is highly technical; Adam Shai of Stanford University and Matthew E. Larkum of Humboldt University, Germany wrote a more accessible paper summarizing the ideas, published in the same eLife issue.)

    Seeing the trees and the forest

    Image of a neuron recorded in Blake Richard’s lab (credit: Blake Richards)

    “Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.” That physical arrangement gives the neurons two separate input streams, sensory signals at the roots and feedback signals at the branches, which is exactly the separation that deep learning requires.

    Using this knowledge of the neurons’ structure, the researchers built a computer model of neurons with the same shapes, in which different signals are received by specific sections. It turns out that these sections allowed simulated neurons in different layers to collaborate, achieving deep learning.
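
    To make that idea concrete, here is a minimal Python/NumPy sketch of a single unit with segregated compartments. It is not the authors' actual model; the class name, the tanh nonlinearity, the learning rate, and the inputs are all invented for illustration, under the assumption of a simple local update gated by the apical feedback signal.

    Code:
    import numpy as np

    rng = np.random.default_rng(0)

    class TwoCompartmentUnit:
        """Toy neuron with segregated 'basal' (feedforward) and 'apical' (feedback) compartments."""

        def __init__(self, n_inputs, n_feedback):
            self.w_basal = rng.normal(scale=0.1, size=n_inputs)     # feedforward (sensory) weights
            self.w_apical = rng.normal(scale=0.1, size=n_feedback)  # feedback weights, fixed here

        def forward(self, x):
            # The basal compartment alone drives the unit's output.
            self.basal = self.w_basal @ x
            return np.tanh(self.basal)

        def receive_feedback(self, fb):
            # The apical compartment integrates top-down feedback separately;
            # it does not change the output, it only gates learning.
            self.apical = self.w_apical @ fb

        def update(self, x, lr=0.01):
            # Local plasticity: nudge feedforward weights in the direction
            # indicated by the apical signal, using only locally available quantities.
            self.w_basal += lr * self.apical * (1.0 - np.tanh(self.basal) ** 2) * x

    unit = TwoCompartmentUnit(n_inputs=4, n_feedback=2)
    x = rng.normal(size=4)        # stand-in "sensory" input
    fb = np.array([0.5, -0.2])    # stand-in top-down feedback (error-like signal)
    y = unit.forward(x)
    unit.receive_feedback(fb)
    unit.update(x)
    print(y, unit.w_basal)

    The real model is considerably more elaborate, but the separation of feedforward and feedback streams within one cell is the key structural ingredient.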

    “It’s just a set of simulations so it can’t tell us exactly what our brains are doing, but it does suggest enough to warrant further experimental examination if our own brains may use the same sort of algorithms that they use in AI,” Richards says.

    “No one has tested our predictions yet,” he told KurzweilAI. “But, there’s a new preprint that builds on what we were proposing in a nice way from Walter Senn’s group, and which includes some results on unsupervised learning (Yoshua [Bengio] mentions this work in his talk).”

    How the brain achieves deep learning

    The tree-like pyramidal neocortex neurons are only one of many types of cells in the brain. Richards says future research should model different brain cells and examine how they interact together to achieve deep learning. In the long term, he hopes researchers can overcome major challenges, such as how to learn through experience without receiving feedback or to solve the “credit assignment problem.”**

    Deep learning has brought about machines that can “see” the world more like humans can, and recognize language. But does the brain actually learn this way? The answer has the potential to create more powerful artificial intelligence and unlock the mysteries of human intelligence, he believes.

    “What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” Richards says.

    Perhaps this kind of research could one day also address future ethical and other human-machine-collaboration issues — including merger, as Elon Musk and Ray Kurzweil have proposed, to achieve a “soft takeoff” in the emergence of superintelligence.

    * This research idea goes back to AI pioneers Geoffrey Hinton, a CIFAR Distinguished Fellow and founder of the Learning in Machines & Brains program, and program Co-Director Yoshua Bengio, and was one of the main motivations for founding the program. These researchers sought not only to develop artificial intelligence, but also to understand how the human brain learns, says Richards.

    In the early 2000s, Richards and Lillicrap took a course with Hinton at the University of Toronto and were convinced deep learning models were capturing “something real” about how human brains work. At the time, there were several challenges to testing that idea. Firstly, it wasn’t clear that deep learning could achieve human-level skill. Secondly, the algorithms violated biological facts proven by neuroscientists.

    The paper builds on research from Bengio’s lab on a more biologically plausible way to train neural nets and an algorithm developed by Lillicrap that further relaxes some of the rules for training neural nets. The paper also incorporates research from Matthew Larkum on the structure of neurons in the neocortex.

    By combining neurological insights with existing algorithms, Richards’ team was able to create a better and more realistic algorithm for simulating learning in the brain.

    The study was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), a Google Faculty Research Award, and CIFAR.

    ** In the paper, the authors note that a large gap exists between deep learning in AI and our current understanding of learning and memory in neuroscience. “In particular, unlike deep learning researchers, neuroscientists do not yet have a solution to the ‘credit assignment problem’ (Rumelhart et al., 1986; Lillicrap et al., 2016; Bengio et al., 2015). Learning to optimize some behavioral or cognitive function requires a method for assigning ‘credit’ (or ‘blame’) to neurons for their contribution to the final behavioral output (LeCun et al., 2015; Bengio et al., 2015). The credit assignment problem refers to the fact that assigning credit in multi-layer networks is difficult, since the behavioral impact of neurons in early layers of a network depends on the downstream synaptic connections.” The authors go on to suggest a solution.
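
    As a toy numerical illustration of that point (not taken from the paper), the snippet below runs one step of ordinary backpropagation in a two-layer network: the credit delivered to each hidden unit is the output error weighted by that unit's downstream synapses, which is exactly the non-local information the authors say biological neurons have no obvious way to obtain.

    Code:
    import numpy as np

    rng = np.random.default_rng(1)

    # Tiny two-layer network: input -> hidden (W1) -> output (W2), squared-error loss.
    x = rng.normal(size=3)
    W1 = rng.normal(size=(4, 3))
    W2 = rng.normal(size=(1, 4))
    target = np.array([1.0])

    h = np.tanh(W1 @ x)   # hidden activations
    y = W2 @ h            # network output
    err = y - target      # output error

    # Backprop's answer to credit assignment: the error signal reaching each
    # hidden unit is the output error weighted by that unit's *downstream*
    # synapses (W2), information a single biological neuron cannot obviously access.
    delta_hidden = (W2.T @ err) * (1.0 - h ** 2)
    grad_W1 = np.outer(delta_hidden, x)

    print("hidden-layer credit:", delta_hidden)
    print("implied W1 update direction:\n", -grad_W1)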
     
  6. Dr. AMK

    Scientists Rush To Explore Underwater World Hidden Below Ice For 120,000 Years
    Jeanna Bryner Live Science
    https://www.huffingtonpost.com/entr...-for-120000-years_us_5a8da86ee4b0273053a6fac0
    CREDIT: NASA
    Looking out from the sea ice to iceberg A68, around November 2017, just months after the berg calved from Antarctica’s Larsen C Ice Shelf in July.
    A huge, trillion-ton iceberg about the size of Delaware broke free from Antarctica’s Larsen C Ice Shelf in July 2017. As it moved away from its chilly birth mom and into the Weddell Sea, a vast expanse of water saw the light of day for the first time in up to 120,000 years.
    And this month, a team of scientists will venture to the long-ice-buried expanse to investigate the mysterious ecosystem that was hidden beneath the Antarctic ice shelf for so long.

    The newly exposed seabed stretches across an area of about 2,246 square miles (5,818 square kilometers), according to the British Antarctic Survey (BAS), which is leading the expedition. The scientists consider their journey “urgent,” as they hope to document the system before sunlight begins to change at least the surface layers. [In Photos: Antarctica’s Larsen C Ice Shelf Through Time]

    “The calving of [iceberg] A-68 [from the Larsen C Ice Shelf] provides us with a unique opportunity to study marine life as it responds to a dramatic environmental change. It’s important we get there quickly before the undersea environment changes as sunlight enters the water and new species begin to colonize,” Katrin Linse, of the British Antarctic Survey, said in a statement.
    CREDIT: NASA
    The edge of Larsen C Ice Shelf with the western edge of iceberg A68 in the distance.
    What lies beneath?

    Scientists know little about the possibly alien-like life that has taken up residence beneath Antarctica’s ice shelf. What they do know comes from similar calving events in the past: Chunks of ice broke off the Larsen A and B shelves (located north of Larsen C on the Antarctic Peninsula) in 1995 and 2002, respectively. Two German expeditions to those “newly” exposed areas revealed sparse life. However, it took five to 12 years for the expeditions to make it to those areas, and by that time creatures from other areas had made their way to both spots, Live Science previously reported.

    In other icy realms around Antarctica, some bizarre creatures have turned up. For instance, a bristled marine worm that lives in the Southern Ocean, and Live Science previously reported as looking like a “Christmas ornament from hell,” has an extendable throat tipped with pointy teeth. And some creatures have made a living under extreme conditions, including a crustacean called Lyssianasid amphipod, which was found thriving beneath the Ross Ice Shelf in western Antarctica. One of the more famous Antarctic animals, the icefish has natural antifreeze in its blood and body fluids, allowing it to survive the frigid temperatures of Earth’s chilly bottom.

    To explore the once-hidden ecosystem, the scientists — hailing from nine research institutes — will set off from the Falkland Islands on Feb. 21. They plan to spend three weeks aboard the BAS research ship, the RRS James Clark Ross. To navigate the ice-filled water to the remote location, the ship will rely on satellite data, according to the BAS.

    Once they arrive, the team plans to collect samples of life (seafloor animals, microbes, plankton and any other inhabitants) as well as sediments and water.
     
  7. Dr. AMK

  8. Dr. AMK

    Vodafone and Nokia have partnered to create a Moon-based communications network using 4G LTE.
    The German new-space company PTScientists has been planning a mission to the Moon for years. It partnered with Audi to build and deliver two Audi lunar quattro rovers, developed for the XPrize competition, which will explore the lunar surface and carefully approach the Apollo 17 landing site in 2019. Now the team has partnered with Vodafone and Nokia to create a Moon-based communications network using 4G LTE and bring high-definition video of the Moon to those of us here on Earth. Vodafone's base station will communicate with the rovers while they collect images and video of the lunar landscape. The 4G network will use the 1800 MHz frequency band to send high-definition video to the autonomous ALINA landing and navigation module, which will then relay it to PTScientists in Berlin. Nokia is building space-grade network equipment that will weigh less than a kilogram in total. 4G is more energy efficient than analog radio, and will allow more data to be transferred between the rovers and the ALINA station. "We are very happy to have been selected by Vodafone to be its technology partner," said Marcus Weldon of Nokia. "This important mission supports, among other things, the development of new space technologies for future data networking, processing and storage, and will help advance the communications infrastructure required for academics, industry and educational institutions conducting lunar research. These aims have potentially very broad implications for many stakeholders and humanity in general, and we look forward to working closely with Vodafone and the other partners in the coming months, before the launch in 2019."


    Nokia and Vodafone will bring 4G to the Moon
    https://www.engadget.com/2018/02/27/nokia-vodafone-4g-moon/
     
  9. Dr. AMK

    Machine learning could improve how doctors diagnose heart attacks
    You’re working in your house, going about your normal routine, when suddenly the pain hits. Your chest starts to throb and your left arm begins to ache. Without hesitation, you rush to the hospital, dreading that your worst fear has become a reality: you are having a heart attack. Upon arrival, physicians, nurses, and other medical staff begin frantically testing, probing, and prodding nearly every part of your body. They run more tests than you can keep track of and begin shouting orders for new tests to other members of the team. The emergency physician is carefully watching the monitors hooked up by your bedside, puzzled by the results they are seeing. They turn to consult a cardiologist, an expert on the signals your heart is emitting. But instead of a person, they turn to a computer.
    Heart attacks and heart attack detection
    Every day in the United States more than 2,000 people have heart attacks. Of these, over 400 people do not receive treatment in time. A heart attack occurs when material clogs the arteries that supply blood to the heart. Without blood, the heart does not have the necessary nutrients to continue functioning normally, and it begins to die. The longer a patient waits, the more likely the attack will cause irreparable damage to the heart. While researchers have made advancements in heart attack detection, the underlying methods remain unchanged from a century ago. Currently, physicians use the same electrocardiograms (ECGs) developed in the early 20th century to monitor the electrical activity of the heart. Depending on the location and severity of the heart attack, certain regions of the ECG may change. However, these changes are small, unreliable, and only include a small portion of the entire electrical signal of the heart.

    Researchers have applied different signal processing and other complicated mathematical manipulations to ECGs. These processing steps have been unable to compensate for the differences in each heart in each person.

    Just like your fingerprint, your heart has a slightly different shape, a different beating force, and thus, a different resting ECG signal than any other. Not to mention, the space between the heart and the recording device on the body’s surface can vary greatly with weight, gender, and overall body type. These variations make it very difficult for automated systems to predict what your specific heart is going through at any given moment. This necessitates a new system that can adapt based on your unique heart shape and signal to detect whether or not you are having a heart attack.

    Above: The colors represent the ECG voltage values distributed across the body surface at a single point in time. LEFT: Healthy body surface voltages. RIGHT: Body surface voltages created by the heart in the time leading up to a heart attack. Note: The red circle on the torso surface in the right image corresponds to the diagnostic marker clinicians typically look for.

    To improve electrocardiogram measurement techniques, our team utilized recent developments in computer science to “teach” computers to read cardiac electrical signals. With the incorporation of machine learning, electrocardiograms can tell us more than ever before about your heart.

    How machine learning works
    Machine learning is a technique developed by researchers to teach computers to identify unique features in datasets that are not easily distinguishable by the naked eye. Researchers give the computer multiple sets of categorized data with different features. The computer then “learns” which features within the dataset separate it into various categories. These features detected by the computer are often subtle and complex and may not be distinguishable by human observers. Once the computer has learned which features correspond to different categories, it can apply that knowledge to determine the category a new dataset belongs to.
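
    For readers who want to see that train-then-classify loop in miniature, here is a rough sketch using synthetic data and an off-the-shelf scikit-learn classifier. It is not the SCI team's actual features, data, or model; every number and label below is a stand-in chosen purely for illustration.

    Code:
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)

    # Synthetic stand-in for labeled, ECG-derived feature vectors:
    # label 0 = "not having a heart attack", label 1 = "having a heart attack".
    n_per_class, n_features = 200, 12
    healthy = rng.normal(loc=0.0, size=(n_per_class, n_features))
    ischemic = rng.normal(loc=0.8, size=(n_per_class, n_features))
    X = np.vstack([healthy, ischemic])
    y = np.array([0] * n_per_class + [1] * n_per_class)

    # "Learning": the classifier works out which feature combinations
    # separate the two labeled categories.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Applying the learned model to a new, unlabeled recording.
    new_recording = rng.normal(loc=0.8, size=(1, n_features))
    print("predicted category:", clf.predict(new_recording)[0])
    print("held-out accuracy:", clf.score(X_test, y_test))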

    How we use machine learning
    The Scientific Computing and Imaging (SCI) Institute at the University of Utah is a world leader in biomedical computing and visualization. The SCI Institute has 17 full-time faculty members and around 200 students, programmers, and staff, some of whom also belong to the departments of bioengineering, computing, mathematics, and physics. The overarching research objective of the SCI Institute is to create new scientific computing techniques, tools, and systems that enable solutions to important problems in biomedicine, science, and engineering. The SCI Institute, which was named an Nvidia GPU Center of Excellence, seeks to use the power and versatility of modern computing to drive progress in a variety of fields.

    We have used machine learning to detect changes in the cardiac signal that indicate the first signs of a heart attack. Our approach isolates the electrical signals from the heart and examines changes before, during, and after simulated heart attacks. The computer then reads these signals and categorizes the data. The two categories the computer isolates are “having a heart attack” and “not having a heart attack.” Compared to traditional human observers, the computer can determine the onset of a heart attack 10 percent faster. The computer was also 32 percent more accurate at detecting the early signs of a heart attack. Each additional episode detected by the machine learning algorithm is a potentially averted misdiagnosis.


    The future of heart attack detection
    Using machine learning to help physicians detect heart attacks is advancing the field of cardiology. Physicians and health care workers will soon have a better tool to detect and treat you during one of life’s most dire circumstances. This tool could even protect people who are at risk of heart attacks because of genetic predispositions or environmental factors. This research could contribute to new ways of understanding and detecting heart attacks, perhaps even making death from heart attacks a thing of the past.

    If you ever have to visit your doctor for chest pain, check who their partner is — it might just be a computer.
     
  10. Dr. AMK

    Reducing the Complexity of IoT Development
    https://newsroom.intel.com/editorials/reducing-complexity-iot-development/
    Intel Extensively Revamps Its Internet of Things Roadmap to Benefit Developers and Integrators
    We have one message for IoT developers and integrators at this year’s Embedded World 2018: We are doing everything we can to help you prototype and develop your ideas faster and get your solutions to market sooner.

    The Intel Internet of Things Group has been talking to a lot of developers and partners in the IoT ecosystem – and listening carefully. The result is we’ve completely revamped our roadmaps for developer tools and IoT solution resources to create a more seamless experience across the ecosystem.

    Here is what we mean:
    New Ingredients

    Intel will showcase Intel® FPGAs and Movidius™ Myriad™ 2 VPUs, components that provide power-efficient acceleration to critical edge applications. Our Exor Smart Factory 4.0 demo models an entire working industrial system using Intel FPGA devices for device control in edge-compatible formats. Myriad 2 gives developers immediate access to its advanced vision processing core, while allowing them to develop proprietary capabilities that provide true differentiation.

    Richer Tools, Kits and SDKs

    We have consolidated our tool offerings to streamline the development path from prototype to production by integrating new features and capabilities into Intel® System Studio 2018. These include support for 400 sensors, enhanced debugger workflows, more libraries and code samples, improved data compression, and optimizations for small matrix multiplication. The result is a cross-platform system and IoT development tool suite that helps simplify system bring-up, boost performance and power efficiency, and strengthen system reliability.

    We worked with Arduino* to streamline commercial application prototypes based on Intel® architecture using Arduino Create* and added Intel-based hardware capabilities into its development suite. For professional capabilities, there is a bridge from Arduino Create directly into Intel System Studio 2018 for a seamless development experience, leveraging advanced functionality provided by Intel’s tools.

    You can also accelerate designs with the UP Squared* Grove* IoT Development Kit. Easy to use and versatile, the kit offers a rapid prototyping platform for applications, including media encode/decode, signal and data processing, and machine learning.


    Accelerating Development with Intel IoT RFP-Ready Kits

    Request for proposal (RFP)-ready kits bundle hardware, middleware, software, sensors and support. These kits contain the core essentials an integrator needs to prototype quickly; integrators need only add their own unique differentiating features.

    RFP-ready kits are focused on specific use cases, like visual retail, smart buildings, security surveillance and remote health care. One example is the Advantech* RFP kit, a solar-powered setup that demonstrates Advantech’s UTX-3117 gateway combined with Wind River® Helix Device Cloud and Pulsar™ Linux*. This combination enables next-generation systems to process high volumes of data with low latency, high bandwidth and low power consumption.

    Intel IoT Market-Ready Solutions

    For fast deployments that are end user-ready, there are built-to-scale Market-Ready Solutions (MRS). Through our ecosystem partners, we are providing an extensive portfolio of proven, integrated and already-deployed MRSs. We are showing the new Dell* V5 Solution, a self-powered, portable artificial intelligence security solution that provides ongoing video monitoring and advanced analytics. The Hayward Police Department in California rapidly deployed these units, reducing downtown thefts and other street crimes by over 60 percent.1

    Connect with Intel at Embedded World 2018

    Join us at the Intel booth at Embedded World (#338, Hall 1, Nuremberg Exhibition Center, Nuremberg, Germany) and learn how Intel will help drive innovation for the IoT through new hardware, software and tools for developers. Visit our Intel Developer Zone for IoT or the IoT Solutions Alliance for other resources and to get a jump-start on your next project.

    Jonathan Luse is general manager of IoT Planning and Product Line Management at Intel Corporation.

    1Source: V5 Systems Case Study, City of Hayward
     