Learning AI at school — a peek into the black box

“In the near future, perhaps sooner than we think, virtually everyone will need a basic understanding of the technologies that underpin machine learning and artificial intelligence.” — from the 2018 Informatics Europe & EUACM report about machine learning

As the quote above highlights, AI and machine learning (ML) are increasingly affecting society and will continue to change the landscape of work and leisure — with a huge impact on young people in the early stages of their education.

But how are we preparing our young people for this future? What skills do they need, and how do we teach them these skills? This was the topic of last week’s online research seminar at the Raspberry Pi Foundation, with our guest speaker Juan David Rodríguez Garcia. Juan’s doctoral studies around AI in school complement his work at the Ministry of Education and Vocational Training in Spain.

Juan David Rodríguez Garcia

Juan’s LearningML tool for young people

Juan started his presentation by sharing numerous current examples of AI and machine learning: examples that young people can easily relate to and engage with, and that raise ethical questions we need to be discussing with them.

Of course, it’s not enough for learners to be aware of AI applications. While machine learning is a complex field of study, we need to consider what aspects of it we can make accessible to young people to enable them to learn about the concepts, practices, and skills underlying it. During his talk Juan demonstrated a tool called LearningML, which he has developed as a practical introduction to AI for young people.

Screenshot of a demo of Juan David Rodríguez Garcia's LearningML tool

Juan demonstrates image recognition with his LearningML tool

LearningML takes inspiration from other in-development machine learning tools for children, such as Machine Learning for Kids, and brings everything together in one integrated platform. Juan gave an enticing demo of the tool, showing how to use visual image data (lots of pictures of Juan with hats, glasses on, etc.) to train and test a model. He then demonstrated how to use Scratch programming to test the model and apply it to new data. The seminar audience was very positive about LearningML, and of course we’d like it translated into English!
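The train-then-apply workflow Juan demonstrated can be illustrated with a minimal nearest-centroid classifier. This is a simplified stand-in for what tools like LearningML do behind the scenes, not their actual implementation, and the feature vectors here are invented toy numbers rather than real image features:

```python
import math

def train(examples):
    """Average each class's feature vectors into a single centroid."""
    centroids = {}
    for label, vectors in examples.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def classify(centroids, vector):
    """Return the label whose centroid is closest to the new vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(centroids[label], vector))

# Toy "image features": two made-up classes, e.g. photos with and without a hat
training_data = {
    "hat":    [[0.9, 0.1], [0.8, 0.2], [1.0, 0.0]],
    "no_hat": [[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]],
}
model = train(training_data)
print(classify(model, [0.85, 0.15]))  # a new example resembling the "hat" class
```

The point of tools like LearningML is that learners experience this train/test/apply cycle concretely, without needing the mathematics behind it first.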

Juan’s talk generated many questions from the audience, ranging from technical questions to the key question of how we can use the tool to introduce children to bias in AI. Seminar participants also highlighted opportunities to bring machine learning to other school subjects such as science.

AI in schools — what and how to teach

Machine learning demonstrates that computers can learn from data. This is just one of the five big ideas in AI that the AI4K12 group has identified for teaching AI in school in order to frame this broad domain:

  1. Perception: Computers perceive the world using sensors
  2. Representation & reasoning: Agents maintain models/representations of the world and use them for reasoning
  3. Learning: Computers can learn from data
  4. Natural interaction: Making agents interact comfortably with humans is a substantial challenge for AI developers
  5. Societal impact: AI applications can impact society in both positive and negative ways

One general concern I have is that in our teaching of computing in school (if we touch on AI at all), we may only focus on the fifth of the ‘big AI ideas’: the implications of AI for society. Being able to understand the ethical, economic, and societal implications of AI as this technology advances is indeed crucial. However, the principles and skills underpinning AI are also important, and how we introduce these at an age-appropriate level remains a significant question.

Illustration of AI, Image by Seanbatty from Pixabay

There are some great resources for developing a general understanding of AI principles, including unplugged activities from Computer Science For Fun. Yet there’s a large gap between understanding what AI is and has the potential to do, and actually developing the highly mathematical skills to program models. It’s not an easy gap to close, but Juan’s tool goes a little way towards bridging it. At the Raspberry Pi Foundation, we’re also developing resources to bridge this educational gap, including new online projects building on our existing machine learning projects, and an online course. Watch this space!

AI in the school curriculum and workforce

All in all, we seem to be a long way off introducing AI into the school curriculum. Looking around the world, in the USA, Hong Kong, and Australia there have been moves to introduce AI into K-12 education through pilot initiatives, and hopefully more will follow. In England, with a computing curriculum that was written in 2013, there is no requirement to teach any AI or machine learning, or even to focus much on data.

Let’s hope England doesn’t get left too far behind, as there is a massive AI skills shortage, with millions of workers needing to be retrained in the next few years. Moreover, a recent House of Lords report outlines that introducing all young people to this area of computing also has the potential to improve diversity in the workforce — something we should all be striving towards.

We look forward to hearing more from Juan and his colleagues as this important work continues.

Next up in our seminar series

If you missed the seminar, you can find Juan’s presentation slides and a recording of his talk on our seminars page.

In our next seminar on Tuesday 2 June at 17:00–18:00 BST / 12:00–13:00 EDT / 9:00–10:00 PDT / 18:00–19:00 CEST, we’ll welcome Dame Celia Hoyles, Professor of Mathematics Education at University College London. Celia will be sharing insights from her research into programming and mathematics. To join the seminar, simply sign up with your name and email address and we’ll email the link and instructions. If you attended Juan’s seminar, the link remains the same.

The post Learning AI at school — a peek into the black box appeared first on Raspberry Pi.


#FreePCB via Twitter to 2 random RTs

Every Tuesday we give away two coupons for the free PCB drawer via Twitter. This post was announced on Twitter, and in 24 hours we’ll send coupon codes to two random retweeters. Don’t forget there are free PCBs three times every week:

  • Hate Twitter and Facebook? Free PCB Sunday is the classic PCB giveaway. Catch it every Sunday, right here on the blog
  • Tweet-a-PCB Tuesday. Follow us and get boards in 144 characters or less
  • Facebook PCB Friday. Free PCBs will be your friend for the weekend

Some stuff:

  • Yes, we’ll mail it anywhere in the world!
  • Check out how we mail PCBs worldwide video.
  • We’ll contact you via Twitter with a coupon code for the PCB drawer.
  • Limit one PCB per address per month please.
  • Like everything else on this site, PCBs are offered without warranty.

We try to stagger free PCB posts so every time zone has a chance to participate, but the best way to see it first is to subscribe to the RSS feed, follow us on Twitter, or like us on Facebook.

from Dangerous Prototypes https://ift.tt/3ejTcb2

Meet your new robotic best friend: the MiRo-E dog

When you’re learning a new language, it’s easier the younger you are. But how can we show very young students that learning to speak code is fun? Consequential Robotics has an answer…

The MiRo-E is an ’emotionally engaging’ robot platform that was created on a custom PCB and has since moved onto Raspberry Pi. The creators made the change because schools were more familiar with Raspberry Pi, and they realised the potential of being able to upgrade the robotic learning tools with new Raspberry Pi boards.

The MiRo-E was born from a collaboration between Sheffield Robotics, London-based SCA design studio, and Bristol Robotics Lab. The cute robo-doggo has been shipping with Raspberry Pi 3B+ (it works well with Raspberry Pi 4 too) for over a year now.

While the robot started as a developers’ tool (MiRo-B), the creators completely re-engineered MiRo’s mechatronics and software to turn it into an educational tool purely for the classroom environment.

Three school children in uniforms stroke the robot dog's chin

MiRo-E with students at a School in North London, UK

MiRo-E can see, hear, and interact with its environment, providing endless programming possibilities. It responds to human interaction, making it a fun, engaging way for students to learn coding skills. If you stroke it, it purrs, lights up, moves its ears, and wags its tail. Making a sound or clapping makes MiRo move towards you, or away if it is alarmed. And it especially likes movement, following you around like a real, loyal canine friend. These functionalities are just the basic starting point, however: students can make MiRo do much more once they start tinkering with their programmable pet.

These opportunities are provided on MiRoCode, a user-friendly web-based coding interface, where students can run through lesson plans and experiment with new ideas. They can test code on a virtual MiRo-E to create new skills that can be applied to a real-life MiRo-E.
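The reactive behaviours described above boil down to a simple sense-act mapping. Here is a hypothetical Python sketch of that idea; the function and action names are invented for illustration and are not part of the real MiRoCode API:

```python
# A hypothetical sense-act loop illustrating MiRo-E-style reactive behaviour.
# Sensor names, thresholds, and actions are invented, not the real MiRoCode API.

def react(stroke, sound_level, alarmed):
    """Map simple sensor readings to a list of behaviours, MiRo-style."""
    actions = []
    if stroke:
        # Stroking triggers the affectionate responses described above
        actions += ["purr", "light_up", "wag_tail"]
    if sound_level > 0.5:
        # A loud sound draws MiRo in, unless it is alarmed
        actions.append("retreat" if alarmed else "approach_sound")
    return actions

print(react(stroke=True, sound_level=0.8, alarmed=False))
# ['purr', 'light_up', 'wag_tail', 'approach_sound']
```

In MiRoCode, students build up exactly this kind of stimulus-to-behaviour logic, first on the virtual robot and then on the real one.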

What’s inside?

Here are the full technical specs. But basically, MiRo-E comprises a Raspberry Pi 3B+ as its core, light sensors, cliff sensors, an HD camera, and a variety of connectivity options.

How does it interact?

MiRo reacts to sound, touch, and movement in a variety of ways. 28 capacitive touch sensors tell it when it is being petted or stroked. Six independent RGB LEDs allow it to show emotion, and degrees of freedom (DOF) in its eyes, tail, and ears let it move them expressively. Its ears also house four 16-bit microphones and a loudspeaker. And two differential drive wheels with opto-sensors help MiRo move around.

What else can it do?

The ‘E’ bit of MiRo-E means it’s emotionally engaging, and the intelligent pet’s potential in healthcare has already been explored. Interaction with animals has been shown to be positive for patients of all ages, but sometimes it’s not possible for ‘real’ animals to comfort people. MiRo-E can fill the gap for young children who would benefit from animal comfort, but for whom healthcare or animal welfare risks are barriers.

The same researchers who created this emotionally engaging robo-dog for young people are also working with project partners in Japan to develop ‘telepresence robots’ for older patients to interact with their families over video calls.

The post Meet your new robotic best friend: the MiRo-E dog appeared first on Raspberry Pi.


Experimental Zener diode tester

Dilshan has published a new build:

This automatic Zener diode tester is capable of identifying Zener diodes up to 27.5V. Apart from that, it can be used to identify the leads of diodes/Zeners and to detect damaged diodes. The tester is designed around well-known ICs such as the MC34063 and PIC16F88.
This unit provides readings accurate to within approximately 5% to 15%. Based on our observations, the accuracy can be improved by using resistors with 1% tolerance, a more stable booster circuit, more accurate sampling methods, and a more optimized PCB layout.
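The basic measurement idea behind a tester like this is to scale the Zener voltage down through a resistive divider into the microcontroller ADC’s range, then undo the scaling in software. A quick sketch of that conversion (the divider values and reference voltage here are illustrative assumptions, not the ones in Dilshan’s design):

```python
def zener_voltage(adc_count, vref=5.0, adc_max=1023, r_top=47e3, r_bottom=10e3):
    """Convert a 10-bit ADC reading taken across the bottom resistor of a
    divider back to the Zener voltage at the top of the divider.
    The 47k/10k divider and 5 V reference are illustrative values only."""
    v_adc = adc_count * vref / adc_max            # voltage at the ADC pin
    return v_adc * (r_top + r_bottom) / r_bottom  # undo the divider scaling

# With a 5.7:1 divider, even a 27.5 V Zener lands inside the 0-5 V ADC range:
print(round(zener_voltage(987), 1))  # 27.5
```

Resistor tolerance enters this formula directly through `r_top` and `r_bottom`, which is why the author notes that 1% resistors would tighten the accuracy.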

More details on Dilshan Jayakody’s blog. Project files are available on GitHub.

from Dangerous Prototypes https://ift.tt/2zkWc8r

App note: P-channel power MOSFETs and applications

Another app note from IXYS on P-Channel power MOSFET application. Link here (PDF)

IXYS P-Channel Power MOSFETs retain all the features of comparable N-Channel Power MOSFETs, such as very fast switching, voltage control, ease of paralleling, and excellent temperature stability. They are designed for applications that require the convenience of reverse-polarity operation. They have an n-type body region that provides lower resistivity in the body region and good avalanche characteristics, because the parasitic PNP transistor is less prone to turn-on. In comparison with N-channel Power MOSFETs with similar design features, P-channel Power MOSFETs have better FBSOA (Forward Bias Safe Operating Area) and are practically immune to Single Event Burnout phenomena. The main advantage of P-channel Power MOSFETs is the simplified gate driving technique in the high-side (HS) switch position.

from Dangerous Prototypes https://ift.tt/3bZ1dAz

App note: Parallel operation of IGBT discrete devices

Guidelines for the parallel operation of IGBT devices are discussed in this app note from IXYS. Link here (PDF)

As applications for IGBT components have continued to expand rapidly, semiconductor manufacturers have responded by providing IGBTs in both discrete and modular packages to meet the needs of their customers. Discrete IGBTs span the voltage range of 250V to 1400V and are available up to 75A (DC), which is the maximum current limit for both the TO-247 and TO-264 terminals. IGBT modules cover the same voltage range but, due to their construction, can control currents up to 1000A today. However, on an Ampere-per-dollar basis, the IGBT module is more expensive, so for cost-sensitive applications, e.g. welding, low-voltage motor control, small UPS, etc., design engineers would like to parallel discrete IGBT devices.

from Dangerous Prototypes https://ift.tt/3cYDPVp

cVert, a truly random MIDI controller

cVert, a truly random MIDI controller @ danny.makesthings.work

cVert is the result of an idea I’ve been kicking around for years, and took a few months of work to bring to fruition. The idea was to use a Geiger counter as a true random number generator to give a non-deterministic input for computer art or music. The result is a MIDI controller with a large amount of control removed – it plays a random musical note every time a radioactive decay is detected.
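One classic way to turn Geiger counter clicks into unbiased random numbers is to compare the lengths of successive inter-pulse intervals, a trick used by hardware RNGs such as HotBits. The sketch below shows that idea feeding a MIDI note number; it is an illustration of the general technique, and the actual cVert firmware may well use a different scheme:

```python
def bits_from_intervals(intervals):
    """Turn inter-pulse timings from a Geiger counter into unbiased random
    bits by comparing successive interval pairs (equal pairs are discarded).
    Radioactive decay timing is non-deterministic, so the bits are truly random."""
    bits = []
    for i in range(0, len(intervals) - 1, 2):
        a, b = intervals[i], intervals[i + 1]
        if a != b:
            bits.append(1 if a > b else 0)
    return bits

def random_note(bits, low=48, high=72):
    """Pack up to 7 random bits into a MIDI note number, clamped to a
    two-octave musical range (note numbers are illustrative choices)."""
    value = 0
    for bit in bits[:7]:
        value = (value << 1) | bit
    return low + value % (high - low)

intervals = [503, 121, 88, 340, 212, 212, 95, 400]  # made-up microsecond timings
print(random_note(bits_from_intervals(intervals)))  # 52
```

The pairwise comparison cancels out the bias in the raw decay rate, which is what makes the source "truly random" rather than merely unpredictable.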

All files are available on GitHub.

Check out the video after the break.

from Dangerous Prototypes https://ift.tt/2ZrVWii

DIY home made portable oscilloscope

An ATmega328 based portable home made oscilloscope with ADC from Creative Engineering:

It is basically a small-scale digital oscilloscope. It is capable of displaying all types of waveforms, like sine, triangular, square, etc. Its bandwidth is above 1 MHz and its input impedance is about 600K. The device uses the ATmega328 microcontroller as its heart, assisted by a high-performance ADC (TLC5510) capable of taking up to 20 mega samples per second, thus increasing the span of bandwidth which can be analyzed by the device. In addition, in order to make the device portable, a Li-ion battery is used, which is suitable for fitting into a confined space.
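A core piece of any digital scope like this is the trigger: scanning the capture buffer for the first rising edge through a threshold so the displayed waveform is stable. A generic sketch of that search, with hysteresis to reject noise (the 8-bit sample values and thresholds are illustrative, not taken from this project’s firmware):

```python
def find_trigger(samples, level, hysteresis=2):
    """Return the index of the first rising edge through `level` in a capture
    buffer. The trigger is 'armed' only after the signal dips below the level
    (minus a hysteresis margin), so noise riding on the threshold is ignored."""
    armed = False
    for i, s in enumerate(samples):
        if s < level - hysteresis:
            armed = True          # signal went below the level: arm the trigger
        elif armed and s >= level:
            return i              # first sample back at/above the level: fire
    return None                   # no valid edge found in this capture

# A crude square wave captured as 8-bit samples:
capture = [200, 200, 50, 50, 50, 200, 200, 50, 50]
print(find_trigger(capture, level=128))  # 5
```

On a microcontroller build like this one, the same search typically runs over a circular DMA/interrupt-filled buffer rather than a Python list, but the logic is identical.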

See project details on Creative Engineering blog.

Check out the video after the break:

from Dangerous Prototypes https://ift.tt/3ggNU1W