All about Blockchain, Cryptocurrency, Digital Transformation

Discussion in 'Off Topic' started by Dr. AMK, Jan 7, 2018.

  1. Dr. AMK
    Weaponizing Artificial Intelligence: The Scary Prospect Of AI-Enabled Terrorism
    There has been much speculation about the power and dangers of artificial intelligence (AI), but that speculation has focused primarily on what AI will do to our jobs in the very near future. Now, there’s discussion among tech leaders, governments and journalists about how artificial intelligence is making lethal autonomous weapons systems possible and what could transpire if this technology falls into the hands of a rogue state or terrorist organization. Debates on the moral and legal implications of autonomous weapons have begun, and there are no easy answers.
    Autonomous weapons already developed

    The United Nations recently discussed the use of autonomous weapons and the possibility of instituting an international ban on “killer robots.” The debate comes on the heels of a warning from more than 100 leaders in the artificial intelligence community, including Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, that these weapons could lead to a “third revolution in warfare.”


    Although artificial intelligence has enabled improvements and efficiencies in many sectors of our economy, from entertainment to transportation to healthcare, weaponized machines that can function without human intervention raise many difficult questions.

    There are already a number of weapons systems with varying levels of human involvement that are actively being tested today.

    In the UK, the Taranis drone, an unmanned combat aerial vehicle, is expected to be fully operational by 2030 and capable of replacing the human-piloted Tornado GR4 fighter planes that are part of the Royal Air Force’s Future Offensive Air System.

    Other countries, including the United States and Russia, are developing robotic tanks that can operate autonomously or be remote controlled. The U.S. also has an autonomous warship that was launched in 2016. Although it’s still in development, it’s expected to have offensive capabilities including anti-submarine weaponry.

    South Korea polices its border with the Samsung SGR-A1 sentry gun, which is reportedly capable of firing autonomously.

    While these weapons were developed to minimize the threat to human life in military conflicts, you don’t need to be an avid sci-fi fan to imagine how terrorist organizations could use them for mass destruction.

    Warnings of AI and killer robots

    The United States and Chinese militaries are testing swarming drones—dozens of unmanned aircraft that can be sent in to overwhelm enemy targets and could be used for mass killings.

    Alvin Wilby, vice president of research at Thales, a French defense giant that supplies reconnaissance drones to the British Army, told the House of Lords Artificial Intelligence Committee that rogue states and terrorists “will get their hands on lethal artificial intelligence in the very near future.” Echoing the same sentiment is Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, who fears that “very bad copies” of such weapons will get into the hands of terrorist groups.

    Not everyone agrees that AI is all bad; in fact, its potential to benefit humanity is immense.

    AI can help fight terrorism

    On the other side of the AI spectrum, Facebook announced that it is using AI to find and remove terrorist content from its platform. Behind the scenes, Facebook uses image-matching technology to identify photos and videos from known terrorists and prevent them from popping up on other accounts. The company also suggested it could use machine-learning algorithms to look for patterns in terrorist propaganda, so it can remove such material from users’ newsfeeds more swiftly. These anti-terror efforts extend to other platforms Facebook owns, including WhatsApp and Instagram. Facebook has also partnered with other tech companies, including Twitter, Microsoft and YouTube, to create an industry database that documents the digital fingerprints of terrorist organizations.
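
    For a sense of how this kind of shared “digital fingerprint” database works, here is a minimal sketch in Python. It is an illustration only, not Facebook’s actual system: production systems use perceptual hashes (such as PhotoDNA) that survive re-encoding and cropping, while this sketch uses a plain cryptographic hash to stay self-contained, and every function name in it is hypothetical.

    import hashlib

    # Hypothetical shared database of fingerprints of known terrorist content.
    known_fingerprints = set()

    def fingerprint(content: bytes) -> str:
        # Derive a fingerprint for an uploaded photo or video.
        return hashlib.sha256(content).hexdigest()

    def register_known_content(content: bytes) -> None:
        # Record a confirmed item's fingerprint in the shared database.
        known_fingerprints.add(fingerprint(content))

    def should_block_upload(content: bytes) -> bool:
        # Block re-uploads whose fingerprint matches a known item.
        return fingerprint(content) in known_fingerprints

    # Once an item is flagged, any byte-identical re-upload is caught;
    # a perceptual hash would extend this to visually similar copies.
    register_known_content(b"flagged video bytes")
    print(should_block_upload(b"flagged video bytes"))   # True
    print(should_block_upload(b"unrelated new upload"))  # False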

    Humans pushed out of life and death decisions

    The overwhelming concern from groups that wish to ban lethal autonomous weapons, such as the Campaign to Stop Killer Robots, is that if machines become fully autonomous, humans won’t have a deciding role in missions that kill. This creates a moral dilemma. And what if an evil regime turns lethal autonomous systems on its own people?

    As Mr. Wilby said, the AI “genie is out of the bottle.” As with other innovations, now that AI technology has begun to impact our world, we need to do our best to find ways to properly control it. If terrorist organizations wish to use AI for evil purposes, perhaps our best defense is an AI offense.
     
  2. Dr. AMK
    Australia: Govpass — your digital identity
    The DTA demonstrates how simple it is to create a Govpass digital identity


     
  3. Dr. AMK
    European Commission: “We Need to Invest €20 billion in AI”
    EC announces fresh € billions, major AI push
    The European Commission (EC) has announced a major push on Artificial Intelligence (AI) coordinated across member states, including ramped up spending, business partnerships and training schemes.
    “We need to invest at least €20 billion in Artificial Intelligence by the end of 2020,” Andrus Ansip, the EC’s Vice President for the Digital Single Market, said on Wednesday, announcing a further €1.5 billion for AI R&D.

    The statement came as the EC announced plans for a three-pronged approach to AI, spanning increased public and private investment, preparation for socio-economic changes, and development of an “appropriate” ethical and legal framework.

    “Transformational” AI Requires Collaboration
    “AI is transforming our world. It presents new challenges that Europe should meet. The Commission is playing its part: today, we are giving a boost to researchers so that they can develop the next generation of AI technologies and applications, and to companies, so that they can embrace and incorporate them,” he added.

    Warning of a brain drain, the Commission said it will support business-education partnerships to attract and keep more AI talent in Europe, set up dedicated training schemes with financial support from the European Social Fund, and support digital skills, competencies in science, technology, engineering and mathematics (STEM), entrepreneurship and creativity.

    “Many jobs will be created, but others will disappear and most will be transformed. This is why the Commission is encouraging Member States to modernise their education and training systems and support labour market transitions,” it said.

    The aim is to “maximise the impact of investments at the EU and national levels” and to clearly articulate how member states can use AI together to maintain the EU’s global competitiveness, the EC said.

    The initial investment from the EC is expected to prompt a further €2.5 billion of funding from existing public-private partnerships in areas such as big data and robotics. Plans for a continent-wide AI roadmap also focus on supporting the development of AI across all major sectors, from transport to healthcare, as well as connecting and strengthening AI centres across Europe.

    “The European Commission is taking the right approach to AI. Their strategy is grounded in ethics and a commitment to responsibility, it avoids a premature push to regulate, and its focus on bringing together industry, government and academic expertise is essential in positioning Europe to help shape the AI future,” Liam Benham, VP of Government and Regulatory Affairs at IBM, said.

    “[...] surrounding the modern AI boom and it is important to ensure that key stakeholders in government have the knowledge and tools they need to shape policies, regulations, and budgets in our AI future.”

    The Commission also outlined its aim to set out ethical guidelines for AI technology.

    “The Commission will present ethical guidelines on AI development by the end of 2018, based on the EU’s Charter of Fundamental Rights,” the EC stated in a release. “As AI is already part of our everyday lives, Europe wants to be at the forefront of these developments.”

    Starting today, and following the Declaration of Co-operation signed by 24 member states, the Commission will begin working with Member States to produce a coordinated AI plan by the end of 2018.

     
  4. hmscott
    Artificial Intelligence Is Stuck. Here’s How to Move It Forward.
    By Gary Marcus, July 29, 2017
    https://www.nytimes.com/2017/07/29/...ce-is-stuck-heres-how-to-move-it-forward.html

    "Artificial Intelligence is colossally hyped these days, but the dirty little secret is that it still has a long, long way to go. Sure, A.I. systems have mastered an array of games, from chess and Go to “Jeopardy” and poker, but the technology continues to struggle in the real world. Robots fall over while opening doors, prototype driverless cars frequently need human intervention, and nobody has yet designed a machine that can read reliably at the level of a sixth grader, let alone a college student. Computers that can educate themselves — a mark of true intelligence — remain a dream.

    Even the trendy technique of “deep learning,” which uses artificial neural networks to discern complex statistical correlations in huge amounts of data, often comes up short. Some of the best image-recognition systems, for example, can successfully distinguish dog breeds, yet remain capable of major blunders, like mistaking a simple pattern of yellow and black stripes for a school bus. Such systems can neither comprehend what is going on in complex visual scenes (“Who is chasing whom and why?”) nor follow simple instructions (“Read this story and summarize what it means”).

    Although the field of A.I. is exploding with microdiscoveries, progress toward the robustness and flexibility of human cognition remains elusive. Not long ago, for example, while sitting with me in a cafe, my 3-year-old daughter spontaneously realized that she could climb out of her chair in a new way: backward, by sliding through the gap between the back and the seat of the chair. My daughter had never seen anyone else disembark in quite this way; she invented it on her own — and without the benefit of trial and error, or the need for terabytes of labeled data.

    Presumably, my daughter relied on an implicit theory of how her body moves, along with an implicit theory of physics — how one complex object travels through the aperture of another. I challenge any robot to do the same. A.I. systems tend to be passive vessels, dredging through data in search of statistical correlations; humans are active engines for discovering how things work.

    To get computers to think like humans, we need a new A.I. paradigm, one that places “top down” and “bottom up” knowledge on equal footing. Bottom-up knowledge is the kind of raw information we get directly from our senses, like patterns of light falling on our retina. Top-down knowledge comprises cognitive models of the world and how it works.

    Deep learning is very good at bottom-up knowledge, like discerning which patterns of pixels correspond to golden retrievers as opposed to Labradors. But it is no use when it comes to top-down knowledge. If my daughter sees her reflection in a bowl of water, she knows the image is illusory; she knows she is not actually in the bowl. To a deep-learning system, though, there is no difference between the reflection and the real thing, because the system lacks a theory of the world and how it works. Integrating that sort of knowledge of the world may be the next great hurdle in A.I., a prerequisite to grander projects like using A.I. to advance medicine and scientific understanding.

    I fear, however, that neither of our two current approaches to funding A.I. research — small research labs in the academy and significantly larger labs in private industry — is poised to succeed. I say this as someone who has experience with both models, having worked on A.I. both as an academic researcher and as the founder of a start-up company, Geometric Intelligence, which was recently acquired by Uber.

    Academic labs are too small. Take the development of automated machine reading, which is a key to building any truly intelligent system. Too many separate components are needed for any one lab to tackle the problem. A full solution will incorporate advances in natural language processing (e.g., parsing sentences into words and phrases), knowledge representation (e.g., integrating the content of sentences with other sources of knowledge) and inference (reconstructing what is implied but not written). Each of those problems represents a lifetime of work for any single university lab.

    Corporate labs like those of Google and Facebook have the resources to tackle big questions, but in a world of quarterly reports and bottom lines, they tend to concentrate on narrow problems like optimizing advertisement placement or automatically screening videos for offensive content. There is nothing wrong with such research, but it is unlikely to lead to major breakthroughs. Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn’t understand a word of what it is translating.

    I look with envy at my peers in high-energy physics, and in particular at CERN, the European Organization for Nuclear Research, a huge, international collaboration, with thousands of scientists and billions of dollars of funding. They pursue ambitious, tightly defined projects (like using the Large Hadron Collider to discover the Higgs boson) and share their results with the world, rather than restricting them to a single country or corporation. Even the largest “open” efforts at A.I., like OpenAI, which has about 50 staff members and is sponsored in part by Elon Musk, is tiny by comparison.

    An international A.I. mission focused on teaching machines to read could genuinely change the world for the better — the more so if it made A.I. a public good, rather than the property of a privileged few.

    Gary Marcus is a professor of psychology and neural science at New York University."
     
  5. Dr. AMK

  6. Dr. AMK

  7. Dr. AMK
  8. Dr. AMK
    CLARIFYING: BLOCKCHAIN IS DISTRIBUTED LEDGER TECHNOLOGY
    Take a look at this logic diagram (source: WEF, “Blockchain Beyond the Hype” report, April 2018). The recommendation to use blockchain rests on answering ‘Yes’ to the six questions below and ‘No’ to the five that follow.

    ● THE YES's (leading to blockchain applicability)

    1. Trying to remove intermediaries or brokers
    2. Working with digital assets (versus physical assets)
    3. Requiring a permanent authoritative record of the digital assets
    4. Managing digital assets' contractual relationships or value exchange
    5. Requiring shared write access to create digital assets
    6. Needing to be able to control functionality (public blockchain)

    ● THE NO's (leading to blockchain applicability)

    1. Not requiring high performance, rapid (~millisecond) transactions
    2. Not intending to store large amounts of non-transactional data
    3. Not relying on a trusted party (e.g. for compliance/liability reasons)
    4. Contributors don't know/trust each other and/or don't have unified or well-aligned interests
    5. No need to be able to control functionality (private blockchain)

    Finally, whether a public or private blockchain (distributed ledger technology) is applicable depends on whether the transactions should be public or not.
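
    To make that decision logic concrete, here is a minimal sketch in Python of the flowchart as a single function. The parameter names are my own shorthand for the questions above, not terminology from the WEF report, and the control-of-functionality questions are folded into the final public-versus-private fork alongside transaction visibility.

    def recommend_blockchain(*, removes_intermediaries: bool,
                             digital_assets: bool,
                             permanent_record: bool,
                             manages_value_exchange: bool,
                             shared_write_access: bool,
                             needs_fast_transactions: bool,
                             stores_bulk_data: bool,
                             relies_on_trusted_party: bool,
                             contributors_trust_each_other: bool,
                             public_transactions: bool) -> str:
        # Blockchain applies only when every "Yes" question holds...
        all_yes = (removes_intermediaries and digital_assets
                   and permanent_record and manages_value_exchange
                   and shared_write_access)
        # ...and every "No" question holds.
        all_no = not (needs_fast_transactions or stores_bulk_data
                      or relies_on_trusted_party
                      or contributors_trust_each_other)
        if not (all_yes and all_no):
            return "blockchain likely not applicable"
        # Final fork: transaction visibility picks the ledger flavour.
        return "public blockchain" if public_transactions else "private blockchain"

    # Example: a digital-asset registry among mutually distrusting parties
    # that keeps its transactions confidential -> private blockchain.
    print(recommend_blockchain(
        removes_intermediaries=True, digital_assets=True,
        permanent_record=True, manages_value_exchange=True,
        shared_write_access=True, needs_fast_transactions=False,
        stores_bulk_data=False, relies_on_trusted_party=False,
        contributors_trust_each_other=False, public_transactions=False))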
     
  9. Dr. AMK
  10. Dr. AMK
    PREDICTING FUTURE RISKS (PDF)
    Risk forecasting and management are, by their very nature, evolving practices. Yet as the business landscape continues to transform, driven by groundbreaking new technology, geopolitical uncertainty and increasing public scrutiny, among other factors, preparing for the next major corporate risk will only become more challenging, and the risks themselves harder to predict.
     
    Last edited: Apr 30, 2018