The Howff 3D scanning rig | The MagPi 99

How do you create a 3D model of a historic graveyard? With eight Raspberry Pi computers, as Rob Zwetsloot discovers in the latest issue of The MagPi magazine, out now.

The software builds up the 3D model of the graveyard

“In the city centre of Dundee is a historical burial ground, The Howff,” says Daniel Muirhead. We should probably clarify that he’s a 3D artist. “This old graveyard is densely packed with around 1500 gravestones and other funerary monuments, which happens to make it an excellent technical challenge for photogrammetry photo capture.”

This dense architecture, together with the stone paths and vibrant flora, is why Daniel ended up creating a 3D-scanning rig out of eight Raspberry Pi computers. And the results are quite stunning.

Eight Raspberry Pi computers are mounted to the ball, with cameras pointing towards the ground

“The goal of this project was to capture photos for use in generating a 3D model of the ground,” he continues. “That model will be used as a base for attaching individual gravestone models and eventually building up a full composite model of this complex subject. The ground model will also be purposed for rendering an ultra-high-resolution map of the graveyard. The historical graveyard has a very active community group that are engaged in its study and digitisation, the Dundee Howff Conservation Group, so I will be sharing my digital outputs with them.”

Google graveyard

There are thousands of pictures, like this one, being used to create the model

To move the rig throughout the graveyard, Daniel used himself as the major moving part. With the eight Raspberry Pi cameras each taking a photo every two seconds, he captured more than 180,000 photos across 13 hours of capture sessions.

“The rig was held above my head and the cameras were angled in such a way as to occlude me from view, so I was not captured in the photographs, which instead were focused on the ground,” he explains. “Of the eight cameras, four were the regular model with 53.5° horizontal field of view (FoV), and the other four were a wide-angle model with 120° FoV. These were arranged on the rig pointing outwards in eight different directions, alternating regular and wide-angle, all angled at a similar pitch down towards the ground. During capture, the rig was rotated by +45° for every second position, so that the wide-angles were facing where the regulars had been facing on the previous capture, and vice versa.”
Daniel worked to a very specific grid pattern, staying in one spot for five seconds at a time, in the hope that at the end he’d have every patch of ground photographed from 16 different positions and angles.
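The capture geometry described above can be sketched in a few lines of Python: eight cameras spaced 45° apart, alternating regular and wide-angle lenses, with the whole rig rotated by 45° at every second grid position so that each heading is eventually covered by both lens types. The numbers come from the article; the code itself is purely illustrative, not Daniel's actual software.

```python
# Illustrative sketch of the rig's capture geometry (not Daniel's actual code).
# Eight cameras point outwards at 45-degree intervals, alternating lens types.
CAMERAS = [("regular" if i % 2 == 0 else "wide", i * 45) for i in range(8)]

def headings_covered(position_index):
    """Return the (lens, heading) pairs captured at one grid position.

    At every second position the rig is rotated by +45 degrees, so the
    wide-angle lenses face where the regular ones faced before, and vice versa.
    """
    offset = 45 * (position_index % 2)
    return [(lens, (angle + offset) % 360) for lens, angle in CAMERAS]

# Over two adjacent positions, every 45-degree heading is covered by both a
# regular and a wide-angle camera: 16 distinct lens/heading combinations.
combos = set(headings_covered(0)) | set(headings_covered(1))
print(len(combos))  # 16
```

This mirrors why each patch of ground ends up photographed from 16 different positions and angles once the grid walk is complete.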

Maker Daniel Muirhead is a 3D artist with an interest in historical architecture

With a lot of photo data to scan through for something fairly complex, we wondered how well the system had worked. Daniel tells us the only problems he had involved some bug fixing in his code: “The images were separated into batches of around 10,000 (1250 photos from each of the eight cameras), plugged into the photogrammetry software, and the software had no problem in reconstructing the ground as a 3D model.”

Accessible 3D surveying

He’s now working towards making the technique accessible and low-cost for others who might want to use it. “Low-cost in the triple sense of financial, labour, and time,” he clarifies. “I have logged around 8000 hours in a variety of photogrammetry software packages, in the process capturing over 300,000 photos with a regular camera for use in such projects, so I have some experience in this area.”

“With the current state of technology, it should be possible with around £1000 in equipment to perform a terrestrial photo-survey of a town centre in under an hour, then with a combined total of maybe three hours’ manual processing and 20 hours’ automated computer processing, generate a high-quality 3D model, the total production time being under 24 hours. It should be entirely plausible for a local community group to use such a method to perform weekly (or at least monthly) 3D snapshots of their town centre.”

The MagPi issue 99 – Out now

The MagPi magazine is out now, available in print from the Raspberry Pi Press online store, your local newsagent, and the Raspberry Pi Store, Cambridge.

You can also download the PDF directly from the MagPi magazine website.

The post The Howff 3D scanning rig| The MagPi 99 appeared first on Raspberry Pi.


Defining the future: The talks of TED Salon: Dell Technologies

“The single biggest threat of climate change is the collapse of food systems,” says journalist Amanda Little, quoting USDA scientist Jerry Hatfield. “Addressing this challenge as much as any other is going to define our progress in the coming century.” She speaks at TED Salon: Dell Technologies on October 22, 2020. (Photo courtesy of TED)

In a time that feels unsettled and uncertain, technology and those who create it will play a crucial role in what’s coming next. How do we define that future, as opposed to letting it define us? At a special TED Salon held as part of the Dell Technologies World conference and hosted by TED’s Simone Ross, four speakers shared ideas for building a future where tech and humanity are combined in a more active, deliberate and thoughtful way.

The talks in brief:

Genevieve Bell, ethical AI expert

Big idea: To create a sustainable, efficient and safe future for artificial intelligence systems, we need to ask questions that contextualize the history of technology and create possibilities for the next generation of critical thinkers to build upon it. 

How? Making a connection between AI and the built world is a hard story to tell, but that’s exactly what Genevieve Bell and her team at 3A Institute are doing: adding to the rich legacy of AI systems, while establishing a new branch of engineering that can sustainably bring cyber-physical systems and AI to scale going forward. “To build on that legacy and our sense of purpose, I think we need a clear framework for asking questions about the future, questions for which there aren’t ready or easy answers,” Bell says. She shares six nuanced questions that frame her approach: Is the system autonomous? Does the system have agency? How do we think about assurance (is it safe and functioning)? How do we interface with it? What will be the indicators that show it is working well? And finally, what is its intent? With these questions, we can broaden our understanding of the systems we create and how they will function in the years to come. 


Amanda Little, food journalist

Big idea: To build a robust, resilient and diverse food future in the face of complex challenges, we need a “third way” forward — blending the best of traditional agriculture with cutting-edge new technologies.

How? COVID-19 has simultaneously paralyzed already vulnerable global food systems and ushered in food shortages — despite a surplus of technological advances. How will we continue to feed a growing population? Amanda Little has an idea: “Our challenge is to borrow from the wisdom of the ages and from our most advanced science to [a] third way: one that allows us to improve and scale our harvest while restoring, rather than degrading, the underlying land of life.” Amid increasingly complex disruptions like climate change, this “third way” provides a roadmap to food security that marries old agricultural production with new, innovative farming practices — like using robots to deploy fertilizer on crop fields with sniper-like precision, eating lab-grown meats and building aeroponic farms. By nixing antiquated supply chains and producing food in a scalable, sustainable and adaptable way, Little shows just how bright our food future might be. Watch the full talk.


“Investing in data quality and accuracy is essential to making AI possible — not only for the few and privileged but for everyone in society,” says data scientist Mainak Mazumdar. He speaks at TED Salon: Dell Technologies on October 22, 2020. (Photo courtesy of TED)

Mainak Mazumdar, data scientist

Big idea: When the pursuit of using AI to make fair and equitable decisions fails, blame the data — not the algorithms.

Why? The future economy won’t be built by factories and people, but by computers and algorithms — for better or for worse. To make AI possible for humanity and society, we need an urgent reset in three major areas: data infrastructure, data quality and data literacy. Together, they hold the key to ethical decision-making in the age of AI. Mazumdar shows how poor-quality data in examples such as the 2020 US Census and marketing research can lead to poor results when trying to reach and help specific demographics. Right now, AI is only reinforcing and accelerating our bias at speed and scale, with societal implications in its wake. But it doesn’t need to be that way. Instead of racing to build new algorithms, our mission should be to build a better data infrastructure that makes ethical AI possible.


Paul D. Miller aka DJ Spooky, multimedia musician

Big idea: Modern computing is founded on patterns, so could you translate the patterns of code and data into music? If so, what would the internet sound like?

How? Cultural achievements throughout human history, like music and architecture, are based on pattern recognition, math and the need to organize information — and the internet is no different. Paul D. Miller aka DJ Spooky gives a tour of how the internet came to be, from the conception of software by Ada Lovelace in the early 1800s to the development of early computers catalyzed by World War II and the birth of the internet beginning in 1969. Today, millions of devices are plugged into the internet, sending data zooming around the world. By transforming the internet’s router connections and data sets into sounds, beats and tempos, Miller introduces “Quantopia,” a portrait of the internet in sound. A special auditory and visual experience, this internet soundscape reveals the patterns that connect us all.

from TED Blog https://ift.tt/3jEeK4t

YouTuber Jeff Geerling reviews Raspberry Pi Compute Module 4

We love seeing how quickly our community of makers responds when we drop a new product, and one of the fastest off the starting block when we released the new Raspberry Pi Compute Module 4 on Monday was YouTuber Jeff Geerling.

Jeff Geerling

We made him keep it a secret until launch day after we snuck one to him early so we could see what one of YouTube’s chief advocates for our Compute Module line thought of our newest baby.

So how does our newest board compare to its predecessor, Compute Module 3+? In Jeff’s first video (above) he reviews some of Compute Module 4’s new features, and he has gone into tons more detail in this blog post.

Jeff also hosted a live-stream Q&A (above), covering some of the most-asked questions about Compute Module 4 and sharing some more features he missed in his initial review video.

His next video (above) is pretty cool. Jeff explains:

“Everyone knows you can overclock the Pi 4. But what happens when you overclock a Compute Module 4? The results surprised me!”

Jeff Geerling

And again, there’s tons more detail on temperature measurement, storage performance, and more on Jeff’s blog.

Top job, Jeff. We have our eyes on your channel for more videos on Compute Module 4, coming soon.

If you like what you see on his YouTube channel, you can also sponsor Jeff on GitHub, or support his work via Patreon.

The post YouTuber Jeff Geerling reviews Raspberry Pi Compute Module 4 appeared first on Raspberry Pi.


Digital making projects about protecting our planet

Explore our new free pathway of environmental digital making projects for young people! These new step-by-step projects teach learners Scratch coding and include real-world data — from data about the impact of deforestation on wildlife to sea turtle tracking information.

By following along with the digital making projects online, young people will discover how they can use technology to protect our planet, all while improving their computing skills.

Photo of a young woman holding an origami bird up to the camera
One of the new projects is an automatic creature counter based on colour recognition with Scratch

The projects help young people affect change

In the projects, learners are introduced to five of the United Nations’ 17 Sustainable Development Goals (SDGs) that have an environmental focus:

  • Affordable and Clean Energy
  • Responsible Consumption and Production
  • Climate Action
  • Life Below Water
  • Life on Land
Screenshot of a Scratch project showing a panda and the Earth
The first project in the new pathway is an animation about the UN’s five SDGs focused on the environment.

Technology, science, maths, geography, and design all play a part in the projects. Following along with the digital making projects, young people learn coding and computing skills while drawing on a range of data from across the world. In this way they will discover how computing can be harnessed to collect environmental data, to explore causes of environmental degradation, to see how humans influence the environment, and ultimately to mitigate negative effects.

Where does the real-world data come from?

To help us develop these environmental digital making projects, we reached out to a number of organisations with green credentials.

Green Sea Turtle Alasdair Davies Raspberry Pi
A sea turtle is being tagged so its movements can be tracked

Inspiring young people about coding with real-world data

The digital making projects, created with 9- to 11-year-old learners in mind, support young people on a step-by-step pathway to develop their skills gradually. Using the block-based visual programming language Scratch, learners build on programming foundations such as sequencing, loops, variables, and selection. The project pathway is designed so that learners can apply what they learned in earlier projects when following along with later projects!

The final project in the pathway, ‘Turtle tracker’, uses real-world data of migrating sea turtles!

We’re really excited to help learners explore the relationship between technology and the environment with these new digital making projects. Connecting their learning to real-world scenarios not only allows young people to build their knowledge of computing, but also gives them the opportunity to affect change and make a difference to their world!

Discover the new digital making projects yourself!

With Green goals, learners create an animation to present the United Nations’ environment-focused Sustainable Development Goals.

Through Save the shark, young people explore sharks’ favourite food source (fish, not humans!), as well as the impact of plastic in the sea, which harms sharks in their natural ocean habitat.

Illustration of a shark with sunglasses

With the Tree life simulator project guide, learners create a project that shows the impact of land management and deforestation on trees, wildlife, and the environment.

Computers can be used to study wildlife in areas where it’s not practical to do so in person. In Count the creatures, learners create a wildlife camera using their computer’s camera and Scratch’s new video sensing extension!

Electricity is important. After all, it powers the computer that learners are using! In Electricity generation, learners input real data about the type and amount of natural resources countries across the world use to generate electricity, and they then compare the results using an animated data visualisation.

Understanding the movements of endangered turtles helps to protect these wonderful animals. In this new Turtle tracker project, learners use tracking data from real-life turtles to map their movements off the coast of West Africa.

Code along wherever you are!

All of our projects are free to access online at any time and include step-by-step instructions. They can be undertaken in a club, classroom, or at home. Young people can share the project they create with their peers, friends, family, and the wider Scratch community.

Visit the Protect our planet pathway to experience the projects yourself.

The post Digital making projects about protecting our planet appeared first on Raspberry Pi.


Talk to your Raspberry Pi | HackSpace 36

In the latest issue of HackSpace magazine, out now, @MrPJEvans shows you how to add voice commands to your projects with a Raspberry Pi 4 and a microphone.

You’ll need:

It’s amazing how we’ve come from everything being keyboard-based to so much voice control in our lives. Siri, Alexa, and Cortana are everywhere and happy to answer questions, play you music, or help automate your household.

For the keen maker, these offerings may not be ideal for augmenting their latest project as they are closed systems. The good news is, with a bit of help from Google, you can add voice recognition to your project and have complete control over what happens. You just need a Raspberry Pi 4, a microphone array, and a Google account to get started.

Set up your microphone

This clever board uses four microphones working together to increase accuracy. A ring of twelve RGB LEDs can be coded to react to events, just like an Amazon Echo

For a home assistant device, being able to hear you clearly is essential. Many microphones are either too low-quality for the task, or are unidirectional: they only hear well in one direction. To the rescue comes Seeed’s ReSpeaker, an array of four microphones with some clever digital processing to provide the kind of listening capability normally found on an Amazon Echo device or Google Assistant. It’s also in a convenient HAT form factor, and comes with a ring of twelve RGB LEDs, so you can add visual effects too. Start with a Raspberry Pi OS Lite installation, and follow these instructions to get your ReSpeaker ready for use.

Install Snowboy

You’ll see later on that we can add the power of Google’s speech-to-text API by streaming audio over the internet. However, we don’t want to be doing that all the time. Snowboy is an offline ‘hotword’ detector. We can have Snowboy running all the time, and when your choice of word is ‘heard’, we switch to Google’s system for accurate processing. Snowboy can only handle a few words, so we only use it for the ‘trigger’ words. It’s not the friendliest of installations so, to get you up and running, we’ve provided step-by-step instructions.

There’s also a two-microphone ReSpeaker for the Raspberry Pi Zero

Create your own hotword

As we’ve just mentioned, we can have a hotword (or trigger word) to activate full speech recognition so we can stay offline. To do this, Snowboy must be trained to understand the word chosen. The code that describes the word (and specifically your pronunciation of it) is called the model. Luckily, this whole process is handled for you at snowboy.kitt.ai, where you can create a model file in a matter of minutes and download it. Just say your choice of words three times, and you’re done. Transfer the model to your Raspberry Pi 4 and place it in your home directory.

Let’s go Google

ReSpeaker can use its multiple mics to detect distance and direction

After the trigger word is heard, we want Google’s fleet of super-servers to help us transcribe what is being said. To use Google’s speech-to-text API, you will need to create a Google application and give it permissions to use the API. When you create the application, you will be given the opportunity to download ‘credentials’ (a small text file) which will allow your setup to use the Google API. Please note that you will need a billable account for this, although you get one hour of free speech-to-text per month. Full instructions on how to get set up can be found here.

Install the SDK and transcriber

To use Google’s API, we need to install the firm’s speech-to-text SDK for Python so we can stream audio and get the results. On the command line, run the following:

pip3 install google-cloud-speech

(If you get an error, run sudo apt install python3-pip and then try again.)

Remember that credentials file? We need to tell the SDK where it is:

export GOOGLE_APPLICATION_CREDENTIALS="/home/pi/[FILE_NAME].json"

(Don’t forget to replace [FILE_NAME] with the actual name of the JSON file.)

Now download and run this test file. Try saying something and see what happens!

Putting it all together

Now that we can talk to our Raspberry Pi, it’s time to link the hotword system to the Google transcription service to create our very own virtual assistant. We’ve provided sample code so that you can see these two systems running together. Run it, then say your chosen hotword. Now ask ‘what time is it?’ to get a response. (Don’t forget to connect a speaker to the audio output if you’re not using HDMI.) Now it’s over to you. Try adding code to respond to other commands, such as ‘turn the light on’.
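The overall flow can be sketched as a simple dispatch loop. This is a hypothetical outline, not the magazine's sample code: hotword detection and the Google speech-to-text call are replaced by stubs, and the command table is just an example you would extend.

```python
import datetime

# Hypothetical sketch of the assistant's control flow. The real project uses
# Snowboy for hotword detection and the Google speech-to-text SDK for
# transcription; both are stubbed out here so the logic is easy to follow.

def handle_command(text):
    """Map a transcribed phrase to a response string."""
    text = text.lower().strip()
    if "what time is it" in text:
        return datetime.datetime.now().strftime("The time is %H:%M")
    if "turn the light on" in text:
        return "Turning the light on"  # hook your GPIO code in here
    return "Sorry, I didn't understand that"

def assistant_loop(hotword_events, transcribe):
    """For each hotword detection, transcribe the follow-up speech and respond.

    hotword_events yields once per detected hotword; transcribe() stands in
    for the cloud speech-to-text call.
    """
    responses = []
    for _ in hotword_events:
        responses.append(handle_command(transcribe()))
    return responses

# Simulated run: one hotword detection followed by one spoken command.
print(assistant_loop([1], lambda: "turn the light on"))
```

Swapping the stubs for the real Snowboy callback and Google transcription gives you the structure the tutorial describes: stay offline until the hotword fires, then go online only for the command that follows.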

Get HackSpace magazine 36 Out Now!

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store, The Raspberry Pi store in Cambridge, or your local newsagents.

Each issue is free to download from the HackSpace magazine website.

The post Talk to your Raspberry Pi | HackSpace 36 appeared first on Raspberry Pi.


FSK LDPC data mode

David writes:

I’m developing an open source data mode using a FSK modem and powerful LDPC codes. The initial use case is the Open IP over UHF/VHF project, but it’s available in the FreeDV API as a general purpose mode for sending data over radio channels.

Via Rowetel.
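As a rough illustration of the FSK idea mentioned above (not David's actual modem or the FreeDV code), a binary FSK modulator simply emits one of two tone frequencies per bit. This sketch generates the samples with the standard library only; the sample rate, baud rate, and tone frequencies are made-up example values.

```python
import math

# Minimal 2FSK modulator sketch (illustrative only; not the FreeDV/Rowetel code).
SAMPLE_RATE = 8000      # samples per second (example value)
BAUD = 100              # bits per second (example value)
F0, F1 = 1200, 1800     # tone frequencies for bit 0 and bit 1, in Hz

def fsk_modulate(bits):
    """Return float samples for a bit sequence, keeping phase continuous.

    Continuous phase across bit boundaries avoids clicks and keeps the
    transmitted spectrum compact.
    """
    samples, phase = [], 0.0
    samples_per_bit = SAMPLE_RATE // BAUD
    for bit in bits:
        freq = F1 if bit else F0
        step = 2 * math.pi * freq / SAMPLE_RATE
        for _ in range(samples_per_bit):
            samples.append(math.sin(phase))
            phase += step
    return samples

samples = fsk_modulate([0, 1, 1, 0])
print(len(samples))  # 4 bits * 80 samples per bit = 320
```

A real mode like David's pairs a demodulator for this waveform with LDPC forward error correction, so that bits corrupted by the radio channel can be recovered.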

from Dangerous Prototypes https://ift.tt/3kAPhKy

Take part in the PA Raspberry Pi Competition for UK schools

Every year, we support the PA Raspberry Pi Competition for UK schools, run by PA Consulting. In this free competition, teams of students from schools all over the UK imagine, design, and create Raspberry Pi–powered inventions.

Female engineer with Raspberry Pi device. Copyright © University of Southampton
Let’s inspire young people to take up a career in STEM!
© University of Southampton

The PA Raspberry Pi Competition aims to inspire young people aged 8 to 18 to learn STEM skills, teamwork, and creativity, and to move toward a career in STEM.

We invite all UK teachers to register if you have students at your school who would love to take part!

For the first 100 teams to complete registration and submit their entry form, PA Consulting provides a free Raspberry Pi Starter Kit to create their invention.

This year’s competition theme: Innovating for a better world

The theme is deliberately broad so that teams can show off their creativity and ingenuity.

  • All learners aged 8 to 18 can take part, and projects are judged in four age groups
  • The judging categories include team passion; simplicity and clarity of build instructions; world benefit; and commercial potential
  • The proposed budget for a team’s invention is around £100
  • The projects can be part of your students’ coursework
  • Entries must be submitted by Monday 22 March 2021
  • You’ll find more details and inspiration on the PA Raspberry Pi Competition webpage

Among all the entries, judges from the tech sector and the Raspberry Pi Foundation choose the finalists with the most outstanding inventions in their age group.

The Dynamix team, finalists in last round’s Y4–6 group, built a project called SmartRoad+

The final teams get to take part in an exciting awards event to present their creations so that the final winners can be selected. This round’s PA Raspberry Pi Awards Ceremony takes place on Wednesday 28 April 2021, and PA Consulting are currently considering whether this will be a physical or virtual event.

All teams that participate in the competition will be rewarded with certificates, and there’s of course the chance to win trophies and prizes too!

You can prepare with our free online courses

If you would like to boost your skills so you can better support your team, then sign up to one of our free online courses designed for educators:

Take inspiration from the winners of the previous round

All entries are welcome, no matter what your students’ experience is! Here are the outstanding projects from last year’s competition:

A look inside the air quality-monitoring project by Team Tempest, last round’s winners in the Y7–9 group

Find out more at the PA Raspberry Pi Competition webinar!

To support teachers in guiding their teams through the competition, PA Consulting will hold a webinar on 12 November 2020 at 4.30–5.30pm. Sign up to hear first-hand what’s involved in taking part in the PA Raspberry Pi Competition, and use the opportunity to ask questions!

The post Take part in the PA Raspberry Pi Competition for UK schools appeared first on Raspberry Pi.


New book: Create Graphical User Interfaces with Python

Laura Sach and Martin O’Hanlon, who are both Learning Managers at the Raspberry Pi Foundation, have written a brand-new book to help you to get more out of your Python projects.

Cover of the book Create Graphical User Interfaces with Python

In Create Graphical User Interfaces with Python, Laura and Martin show you how to add buttons, boxes, pictures, colours, and more to your Python programs using the guizero library, which is easy to use and accessible for all, no matter your Python skills.

This new 156-page book is suitable for everyone — from beginners to experienced Python programmers — who wants to explore graphical user interfaces (GUIs).

Meet the authors

Screenshot of a Digital Making at Home live stream session
That’s Martin in the blue T-shirt with our Digital Making at Home live stream hosts Matt and Christina

You might have met Martin recently on one of our weekly Digital Making at Home live streams for young people, where he was a guest for an ‘ooey-GUI’ code-along session. He talked about his background and what it’s like creating projects and learning resources on a day-to-day basis.

Laura is also pretty cool! Here she is showing you how to solder your Raspberry Pi header pins:

Hi Laura!

Martin and Laura are also tonnes of fun on Twitter. You can find Martin as @martinohanlon, and Laura goes by @codeboom.

10 fun projects

In Create Graphical User Interfaces with Python, you’ll find ten fun Python projects to create with guizero, including a painting program, an emoji match game, and a stop-motion animation creator.

A double-page from the book Create Graphical User Interfaces with Python
A peek inside Laura and Martin’s new book

You will also learn:

  • How to create fun Python games and programs
  • How to code your own graphical user interfaces using windows, text boxes, buttons, images, and more
  • What event-based programming is
  • What good (and bad) user interface design is
A double-page from the book Create Graphical User Interfaces with Python
Ain’t it pretty?

Where can I get it?

You can buy Create Graphical User Interfaces with Python now from the Raspberry Pi Press online store, or the Raspberry Pi store in Cambridge, UK.

And if you don’t need the lovely new book, with its new-book smell, in your hands in real life, you can download a PDF version for free, courtesy of The MagPi magazine.

The post New book: Create Graphical User Interfaces with Python appeared first on Raspberry Pi.


App note: Is the lowest forward voltage drop of real Schottky diodes always the best choice?

App note from IXYS about the pros and cons of different forward voltage drops in real Schottky diodes. Link here (PDF)

According to the thermionic emission model, pure Schottky barriers exhibit a forward voltage drop, which decreases linearly as the barrier height diminishes; whereas the reverse current increases exponentially as the barrier height decreases. Consequently, there exists an optimum barrier height, which can minimize the sum of forward and reverse power dissipation for a particular application.

However, discussions with the users of Schottky diodes reveal that they do not search for the minimum of forward and reverse power dissipation but always for the minimum forward voltage drop. Values of reverse current are very rarely asked for. One must know how the Schottky diode is being applied in order to objectively select the most appropriate part.
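The trade-off the note describes can be made concrete with a toy model: let the forward drop rise linearly with barrier height while the reverse leakage current falls exponentially as the barrier grows, then scan for the barrier height that minimises total dissipation. Every number below is invented for illustration; only the shape of the trade-off reflects the app note.

```python
import math

# Toy model of the Schottky barrier-height trade-off (illustrative numbers only).
I_F = 10.0        # forward current while conducting (A)
DUTY = 0.5        # fraction of the cycle spent conducting
V_R = 40.0        # reverse voltage while blocking (V)
V_T = 0.026       # thermal voltage at room temperature (V)

def forward_drop(phi):
    """Forward drop rises roughly linearly with barrier height phi (V)."""
    return 0.1 + phi

def reverse_current(phi):
    """Leakage falls exponentially as the barrier grows (made-up scaling)."""
    return 5.0 * math.exp(-phi / (2 * V_T))

def total_loss(phi):
    conduction = I_F * forward_drop(phi) * DUTY
    leakage = V_R * reverse_current(phi) * (1 - DUTY)
    return conduction + leakage

# Scan barrier heights: the minimum sits between the extremes, so the lowest
# forward drop (smallest phi) is not automatically the lowest-loss choice.
phis = [i / 1000 for i in range(100, 600)]
best = min(phis, key=total_loss)
print(round(best, 3), "V barrier ->", round(total_loss(best), 2), "W")
```

The interior minimum is the whole point of the app note: a diode chosen purely for its low forward drop can dissipate more in total than one with a slightly higher drop but far lower leakage.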

from Dangerous Prototypes https://ift.tt/35yaled

App note: Voltage vs. output speed vs. torque on DC motors

App note from Precision Microdrives about DC motor capabilities and their uses. Link here

Why Change Torque?
The most obvious benefit of varying the torque is to maintain a constant speed when the motor’s load varies, keeping in mind the interdependent nature of speed, torque, and voltage.

Although this example may be outdated, audio cassettes are a great way of explaining how some applications need to vary the torque to match a changing load. As the cassette plays and the audio tape moves from one spindle to the other, the driving motor will experience a change in load. However, the playback must remain at a constant speed throughout – otherwise the audio pitch would be affected.

Why Change Speed?
The ability to vary motor speed whilst maintaining a steady torque is essential to many applications for a variety of reasons.

An example of an application that requires a variable speed and steady torque is an audio CD player as it is commonly observed that the CD will rotate faster at certain points than others. This is because the information is stored in spiralled circular tracks on the disk and the length/circumference of the track is directly proportional to the amount of information stored on them. This means that the speed must be decreased as the laser is reading from the outermost tracks because there is more information per revolution. Inversely, the speed is increased as the laser reads from the innermost tracks as the spiral circumferences are smaller and therefore contain less information per revolution.
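The CD example above can be quantified: for a constant linear read speed v, the required rotation rate at track radius r is v / (2πr), so the disc must spin more than twice as fast at the innermost track as at the outermost. The figures below assume the commonly quoted audio-CD read speed of about 1.3 m/s and typical track radii.

```python
import math

# Angular speed needed to keep a constant linear velocity at a given radius.
LINEAR_SPEED = 1.3   # approximate audio-CD read speed, metres per second

def rpm_at_radius(radius_m):
    """Revolutions per minute needed at a given track radius (metres)."""
    revs_per_second = LINEAR_SPEED / (2 * math.pi * radius_m)
    return revs_per_second * 60

inner = rpm_at_radius(0.025)   # innermost track, ~25 mm radius
outer = rpm_at_radius(0.058)   # outermost track, ~58 mm radius
print(round(inner), round(outer))  # roughly 497 and 214 rpm
```

The motor therefore has to slow smoothly from roughly 500 rpm down to roughly 200 rpm as playback moves outwards, while its torque output tracks the changing load, which is exactly the speed-versus-torque control problem the app note discusses.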

from Dangerous Prototypes https://ift.tt/3mlwLq1