Setting up two-factor authentication on your Raspberry Pi

Enabling two-factor authentication (2FA) to boost security for your important accounts is becoming a lot more common these days. However, you might be surprised to learn that you can do the same with your Raspberry Pi. Once you enable 2FA on your Raspberry Pi, you’ll be challenged for a verification code whenever you access it remotely via Secure Shell (SSH).

Accessing your Raspberry Pi via SSH

A lot of people use a Raspberry Pi at home as a file, or media, server. This has become rather common since the launch of Raspberry Pi 4, which has both USB 3 and Gigabit Ethernet. However, when you’re setting up this sort of server you often want to run it “headless”: without a monitor, keyboard, or mouse. This is especially true if you intend to tuck your Raspberry Pi away behind your television, or somewhere else out of the way. In any case, it means that you are going to need to enable Secure Shell (SSH) for remote access.

However, it’s also pretty common to set up your server so that you can access your files when you’re away from home, making your Raspberry Pi accessible from the Internet.

Most of us aren’t going to be out of the house much for a while yet, but if you’re taking the time right now to build a file server, you might want to think about adding some extra security. Especially if you intend to make the server accessible from the Internet, you probably want to enable two-factor authentication (2FA) using Time-based One-Time Password (TOTP).

What is two-factor authentication?

Two-factor authentication is an extra layer of protection. As well as a password, “something you know,” you’ll need another piece of information to log in. This second factor will be based either on “something you have,” like a smart phone, or on “something you are,” like biometric information.

We’re going to go ahead and set up “something you have,” and use your smart phone as the second factor to protect your Raspberry Pi.

Updating the operating system

The first thing you should do is make sure your Raspberry Pi is up to date with the latest version of Raspbian. If you’re running a relatively recent version of the operating system you can do that from the command line:

$ sudo apt-get update
$ sudo apt-get full-upgrade

If you’re pulling your Raspberry Pi out of a drawer for the first time in a while, though, you might want to go as far as to install a new copy of Raspbian using the new Raspberry Pi Imager, so you know you’re working from a good image.

Enabling Secure Shell

The Raspbian operating system has the SSH server disabled on boot. However, since we’re intending to run the board without a monitor or keyboard, we need to enable it if we want to be able to SSH into our Raspberry Pi.

The easiest way to enable SSH is from the desktop. Go to the Raspbian menu and select “Preferences > Raspberry Pi Configuration”. Next, select the “Interfaces” tab and click on the radio button to enable SSH, then hit “OK.”

You can also enable it from the command line using systemctl:

$ sudo systemctl enable ssh
$ sudo systemctl start ssh

Alternatively, you can enable SSH using raspi-config, or, if you’re installing the operating system for the first time, you can enable SSH as you burn your SD Card.

Enabling challenge-response

Next, we need to tell the SSH daemon to enable “challenge-response” passwords. Go ahead and open the SSH config file:

$ sudo nano /etc/ssh/sshd_config

Enable challenge response by changing ChallengeResponseAuthentication from the default no to yes.

Editing /etc/ssh/sshd_config.
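
After the edit, the relevant line in /etc/ssh/sshd_config should read:

ChallengeResponseAuthentication yes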

Then restart the SSH daemon:

$ sudo systemctl restart ssh

It’s a good idea to open up a terminal on your laptop and check that you can still SSH into your Raspberry Pi at this point — although you won’t be prompted for a 2FA code quite yet — so you know everything still works before moving on.

Installing two-factor authentication

The first thing you need to do is download an app to your phone that will generate TOTP codes. One of the most commonly used is Google Authenticator. It’s available for Android, iOS, and BlackBerry, and there is even an open source version of the app available on GitHub.

Google Authenticator in the App Store.

So go ahead and install Google Authenticator, or another 2FA app like Authy, on your phone. Afterwards, install the Google Authenticator PAM module on your Raspberry Pi:

$ sudo apt install libpam-google-authenticator

Now that we have 2FA installed on both our phone and our Raspberry Pi, we’re ready to get things configured.

Configuring two-factor authentication

You should now run Google Authenticator from the command line — without using sudo — on your Raspberry Pi in order to generate a QR code:

$ google-authenticator

Afterwards you’re probably going to have to resize the Terminal window so that the QR code is rendered correctly. Unfortunately, it’s just slightly wider than the standard 80 characters across.

The QR code generated by google-authenticator. Don’t worry, this isn’t the QR code for my key; I generated one just for this post that I didn’t use.

Don’t move forward quite yet! Before you do anything else you should copy the emergency codes and put them somewhere safe.

These codes will let you access your Raspberry Pi — and turn off 2FA — if you lose your phone. Without them, you won’t be able to SSH into your Raspberry Pi if you lose or break the device you’re using to authenticate.

Next, before we continue with Google Authenticator on the Raspberry Pi, open the Google Authenticator app on your phone and tap the plus sign (+) at the top right, then tap on “Scan barcode.”

Your phone will ask you whether you want to allow the app access to your camera; you should say “Yes.” The camera view will open. Position the barcode squarely in the green box on the screen.

Scanning the QR code with the Google Authenticator app.

As soon as your phone app recognises the QR code it will add your new account, and it will start generating TOTP codes automatically.

The TOTP in Google Authenticator app.

Your phone will generate a new one-time password every thirty seconds. However, this code isn’t going to be all that useful until we finish what we were doing on your Raspberry Pi. Switch back to your terminal window and answer “Y” when asked whether Google Authenticator should update your .google_authenticator file.

Then answer “Y” to disallow multiple uses of the same authentication token, “N” when asked about increasing the time skew window, and “Y” to enable rate limiting, which protects against brute-force attacks.

You’re done here. Now all we have to do is enable 2FA.

Enabling two-factor authentication

We’re going to use Linux Pluggable Authentication Modules (PAM), which provides dynamic authentication support for applications and services, to add 2FA to SSH on Raspberry Pi.

Now we need to configure PAM to add 2FA:

$ sudo nano /etc/pam.d/sshd

Add the line auth required pam_google_authenticator.so near the top of the file, either above or below the line that says @include common-auth.

Editing /etc/pam.d/sshd.

As I prefer to be prompted for my verification code after entering my password, I’ve added this line after the @include line. If you want to be prompted for the code before entering your password you should add it before the @include line.
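
With the line added after the @include, as described above, the relevant part of /etc/pam.d/sshd ends up reading:

@include common-auth
auth required pam_google_authenticator.so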

Now restart the SSH daemon:

$ sudo systemctl restart ssh

Next, open up a terminal window on your laptop and try to SSH into your Raspberry Pi.

Wrapping things up

If everything has gone to plan, when you SSH into the Raspberry Pi, you should be prompted for a TOTP after being prompted for your password.

SSH’ing into my Raspberry Pi.

You should go ahead and open Google Authenticator on your phone, and enter the six-digit code when prompted. Then you should be logged into your Raspberry Pi as normal.

You’ll now need your phone, and a TOTP, every time you ssh into, or scp to and from, your Raspberry Pi. In return, you’ve given the security of your device a huge boost.

Now that you have the Google Authenticator app on your phone, you should probably start enabling 2FA for your important services and sites — like Google, Twitter, Amazon, and others — since most bigger sites, and many smaller ones, now support two-factor authentication.


Help medical research with folding@home

Did you know: the first machine to break the exaflop barrier (one quintillion floating‑point operations per second) wasn’t a huge dedicated IBM supercomputer, but a bunch of interconnected PCs with ordinary CPUs and gaming GPUs.

With that in mind, welcome to the Folding@home project, which is targeting its enormous power at COVID-19 research. It’s effectively the world’s fastest supercomputer, and your PC can be a part of it.

The Folding@home project is now targeting COVID-19 research

Folding@home with Custom PC

Put simply, Folding@home runs hugely complicated simulations of protein molecules for medical research. These would usually take hundreds of years for a typical computer to process. However, by breaking them up into smaller work units, and farming them out to thousands of independent machines on the Internet, it’s possible to run simulations that would otherwise be impossible.

Back in 2004, Custom PC magazine started its own Folding@home team. The team is currently sitting at number 12 on the world leaderboard and we’re still going strong. If you have a PC, you can join us (or indeed any Folding@home team) and put your spare clock cycles towards COVID-19 research.

Get folding

Getting your machine folding is simple. First, download the client. Your username can be whatever you like, and you’ll need to put in team number 35947 to fold for the Custom PC & bit-tech team. If you want your PC to work on COVID-19 research, select ‘COVID-19’ in the ‘I support research fighting’ pulldown menu.

Set your username and team number

Enter team number 35947 to fold for the Custom PC & bit-tech team

You’ll get the most points per watt from GPU folding, but your CPU can also perform valuable research that can’t be done on your GPU. ‘There are actually some things we can do on CPUs that we can’t do on GPUs,’ said Professor Greg Bowman, Director of Folding@home, speaking to Custom PC in the latest issue.

‘With the current pandemic in mind, one of the things we’re doing is what are called “free energy calculations”. We’re simulating proteins with small molecules that we think might be useful starting points for developing therapeutics, for example.’

Select COVID-19 from the pulldown menu

If you want your PC to work on COVID-19 research, select ‘COVID-19’ in the ‘I support research fighting’ pulldown menu

Bear in mind that enabling folding on your machine will increase power consumption. For reference, we set up folding on a Ryzen 7 2700X rig with a GeForce GTX 1070 Ti. The machine consumes around 70W when idle. That figure increases to 214W when folding on the CPU and around 320W when folding on the GPU as well. If you fold a lot, you’ll see an increase in your electricity bill, so keep an eye on it.
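
To put that in perspective, here’s a quick back-of-the-envelope calculation in Python, using the figures from the rig above; the electricity price is an assumed example, so substitute your own tariff:

IDLE_W = 70           # rig idle, in watts
FOLDING_W = 320       # folding on CPU and GPU together
PRICE_PER_KWH = 0.15  # assumed unit price; check your own tariff

extra_kwh_per_day = (FOLDING_W - IDLE_W) / 1000 * 24  # folding around the clock
print(f"Extra energy: {extra_kwh_per_day:.1f} kWh per day")
print(f"Extra cost: {extra_kwh_per_day * PRICE_PER_KWH:.2f} per day")

That works out at 6 kWh of extra energy per day on this rig, which adds up quickly over a month of continuous folding.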

Folding on Arm?

Could we also see Folding@home running on Arm machines, such as Raspberry Pi? ‘Oh I would love to have Folding@home running on Arm,’ says Bowman. ‘I mean they’re used in Raspberry Pis and lots of phones, so I think this would be a great future direction. We’re actually in contact with some folks to explore getting Folding@home running on Arm in the near future.’

In the meantime, you can still recruit your Raspberry Pi for the cause by participating in Rosetta@home, a similar project also working to help the fight against COVID-19. For more information, visit the Rosetta@home website.

You’ll also find a full feature about Folding@home and its COVID-19 research in Issue 202 of Custom PC, available from the Raspberry Pi Press online store.


Making the best of it: online learning and remote teaching

As many educators across the world are currently faced with implementing some form of remote teaching during school closures, we thought this topic was ideal for the very first of our seminar series about computing education research.

Image by Mudassar Iqbal from Pixabay

Research into online learning and remote teaching

At the Raspberry Pi Foundation, we are hosting a free online seminar every second Tuesday to explore a wide variety of topics in the area of digital and computing education. Last Tuesday we were delighted to welcome Dr Lauren Margulieux, Assistant Professor of Learning Sciences at Georgia State University, USA. She shared her findings about different remote teaching approaches and practical tips for educators in the current crisis.

Lauren’s research interests are in educational technology and online learning, particularly for computing education. She focuses on designing instruction in a way that supports online students who do not necessarily have immediate access to a teacher or instructor to ask questions or overcome problem-solving impasses.

A vocabulary for online and blended learning

In non-pandemic situations, online instruction comes in many forms and serves many purposes, both in higher education and in K-12 (primary and secondary school). Much research has been carried out into how online learning can be used for successful learning outcomes, and in particular, how it can be blended with face-to-face teaching (hybrid learning) to maximise the impact of both contexts.

In her seminar talk, Lauren helped us to understand the different ways in which online learning can take place, by sharing with us vocabulary to better describe different ways of learning with and through technology.

Lauren presented a taxonomy for classifying types of online and blended teaching and learning in two dimensions (shown in the image below). These are the delivery type (technology or instructor), and whether content is being received by learners or actually applied in the learning experience.

Lauren Margulieux seminar slide showing her taxonomy for different types of mixed student instruction

In Lauren’s words: “The taxonomy represents the four things that we control as instructors. We can’t control whether our students talk to each other or email each other, or ask each other questions […], therefore this taxonomy gives us a tool for defining how we design our classes.”

This taxonomy illustrates that there are a number of different ways in which the four types of instruction — instructor-transmitted, instructor-mediated, technology-transmitted, and technology-mediated — can be combined in a learning experience that uses both online and face-to-face elements.

Using her taxonomy in an examination (meta-analysis) of 49 studies relating to computer science teaching in higher education, Lauren found a range of different ways of mixing instruction, which are shown in the graph below.

  • Lecture hybrid means that the teaching is all delivered by the teacher, partly face-to-face and partly online.
  • Practice hybrid means that the learning is done through application of content and receiving feedback, which happens partly face-to-face or synchronously and partly online or asynchronously.
  • Replacement blend refers to instruction where lecture and practice take place in a classroom and part of both is replaced with an online element.
  • Flipped blend instruction is where the content is transmitted through the use of technology, and the application of the learning is supported through an instructor. Again, the latter element can also take place online, but it is synchronous rather than asynchronous — as is the case in our current context.
  • Supplemental blend learning refers to instruction where content is delivered face-to-face, and then practice and application of content, together with feedback, takes place online — basically the opposite of the flipped blend approach.

Lauren Margulieux seminar slide showing learning outcomes of different types of mixed student instruction

Lauren’s examination found that the flipped blend approach was most likely to demonstrate improved learning outcomes. This is a useful finding for the many schools (and universities) that are experimenting with a range of different approaches to remote teaching.

Another finding of Lauren’s study was that approaches that involve the giving of feedback promoted improved learning. This has also been found in studies of assessment for learning, most notably by Black and Wiliam. As Lauren pointed out, the implication is that the reason blended and flipped learning approaches are the most impactful is that they include face-to-face or synchronous time for the educator to discuss learning with the students, including giving feedback.

Lauren’s tips for remote teaching

Of course we currently find ourselves in the midst of school closures across the world, so our only option in these circumstances is to teach online. In her seminar talk, Lauren also included some tips from her own experience to help educators trying to support their students during the current crisis:

  • Align learning objectives, instruction, activities, assignments, and assessments.
  • Use good equipment: headphones to avoid echo and a good microphone to improve clarity and reduce background noise.
  • Be consistent in disseminating information, as there is a higher barrier to asking questions.
  • Highlight important points verbally and visually.
  • Create ways for students to talk with each other, through discussions, breakout rooms, opportunities to talk when you aren’t present, etc.
  • Use video when possible while talking with your students.
  • Give feedback frequently, even if only very briefly.

Although Lauren’s experience is primarily from higher education (post-18), this advice is also useful for K-12 educators.

What about digital equity and inclusion?

All our seminars include an opportunity to break out into small discussion groups, followed by an opportunity to ask questions of the speaker. We had an animated follow-up discussion with Lauren, with many questions focused on issues of representation and inclusion. Some questions related to the digital divide and how we could support learners who didn’t have access to the technology they need. There were also questions from breakout groups about the participation of groups that are typically under-represented in computing education in online learning experiences, and accessibility for those with special educational needs and disabilities (SEND). While there is more work needed in this area, there’s also no one-size-fits-all approach to working with students with special needs, whether that’s due to SEND or to material resources (e.g. access to technology). What works for one student based on their needs might be entirely ineffective for others. Overall, the group concluded that there was a need for much more research in these areas, particularly at K-12 level.

Much anxiety has been expressed in the media, and more formally through bodies such as the World Economic Forum and UNESCO, about the potential long-lasting educational impact of the current period of school closures on disadvantaged students and communities. Research into the most inclusive way of supporting students through remote teaching will help here, as will the efforts of governments, charities, and philanthropists to provide access to technology to learners in need.

At the Raspberry Pi Foundation, we offer lots of free resources for students, educators, and parents to help them engage with computing education during the current school closures and beyond.

How should the education community move forward?

Lauren’s seminar made it clear to me that she was able to draw on decades of research studies into online and hybrid learning, and that we should take lessons from these before jumping to conclusions about the future. In both higher education (tertiary, university) and K-12 (primary, secondary) education contexts, we do not yet know the educational impact of the teaching experiments we have found ourselves engaging in at short notice. As Charles Hodges and colleagues wrote recently in Educause, what we are currently engaging in can only really be described as emergency remote teaching, which stands in stark contrast to planned online learning that is designed much more carefully with pedagogy, assessment, and equity in mind. We should ensure we learn lessons from the online learning research community rather than making it up as we go along.

Today many writers are reflecting on the educational climate we find ourselves in and on how it will impact educational policy and decision-making in the future. For example, an article from the Brookings Institution suggests that the experiences of home teaching and learning that we’ve had in the last couple of months may lead to an increased use of online tools at home, an increase in home schooling, and a move towards competency-based learning. An article by Jo Johnson (President’s Professorial Fellow at King’s College London) on the impact of the pandemic on higher education suggests that traditional universities will suffer financially due to a loss of income from international students less likely to travel to universities in the UK, USA, and Australia. But he also argues that the crisis will accelerate take-up of online, distance-learning, and blended courses for far-sighted and well-organised institutions that are ready to embrace this opportunity, in sum broadening participation and reducing elitism. We all need to be ready and open to the ways in which online and hybrid learning may change the academic world as we know it.

Next up in our seminar series

If you missed this seminar, you can find Lauren’s presentation slides and a recording of her talk on our seminars page.

Next Tuesday, 19 May at 17:00–18:00 BST, we will welcome Juan David Rodríguez from the Instituto Nacional de Tecnologías Educativas y de Formación del Profesorado (INTEF) in Spain. His seminar talk will be about learning AI at school, and about a new tool called LearningML. To join the seminar, simply sign up with your name and email address and we’ll email the link and instructions. If you attended Lauren’s seminar, the link remains the same.


Fix slow Nintendo Switch play with your Raspberry Pi

Is your Nintendo Switch behaving more like a Nintendon’t due to poor connectivity? Well, TopSpec (hosted by Chris Barlas) has shared a brilliant Raspberry Pi-powered hack on YouTube to help you fix that.

 

Here’s the problem…

When you play Switch games online, the connections are peer-to-peer rather than routed through dedicated servers. The Switches decide which player’s internet connection is more stable, and that player becomes the host.

However, some users have found that poor internet performance causes game play to lag. Why? It’s to do with the way data is shared between the Switches, as ‘packets’.

 

What are packets?

Think of it like this: 200 postcards will fit through your letterbox a few at a time, but the same message wrapped up as one big parcel won’t. Even though it’s only one item, it’s too big to fit. So instead, you could receive all the postcards through the letterbox and stitch them together once they’ve been delivered.

Similarly, a packet is a small unit of data sent over a network, and packets are reassembled into a whole file, or some other chunk of related data, by the computer that receives them.

Problems arise if any of the packets containing your Switch game’s data go missing, or arrive late. This will cause the game to pause.
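
As a purely illustrative sketch of the idea (real network stacks use packet headers, checksums, and retransmission), here’s the postcard analogy in a few lines of Python:

import random

message = b"All the data for one frame of your Switch game"
PACKET_SIZE = 8

# chop the message into numbered packets
packets = [(seq, message[i:i + PACKET_SIZE])
           for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

random.shuffle(packets)  # packets can arrive in any order

# the receiver sorts by sequence number and stitches them back together
reassembled = b"".join(data for seq, data in sorted(packets))
assert reassembled == message  # a lost or late packet would break this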

Fix Nintendo Switch Online Lag with a Raspberry Pi! (Ethernet Bridge)


Chris explains that games like Call of Duty have code built in to mitigate the problems around this, but that it seems to be missing from a lot of Switch titles.

 

How can Raspberry Pi help?

The advantage of using Raspberry Pi is that it can handle wireless networking more reliably than Nintendo Switch on its own. Bring the two devices together using a LAN adapter, and you’ve got a perfect pairing. Chris reports speeds up to three times faster using this hack.

A Nintendo Switch > LAN adaptor > Raspberry Pi

He ran a download speed test using a Nintendo Switch by itself, and then using a Nintendo Switch with a LAN adapter plugged into a Raspberry Pi. He found the Switch connected to the Raspberry Pi was quicker than the Switch on its own.

At 2 mins 50 secs into the video, Chris walks through the steps you’ll need to take to get similar results.
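
Chris’s video is the definitive guide to his exact setup. As a rough illustration of what’s involved, one common recipe for sharing a Raspberry Pi’s wireless connection over its Ethernet port goes like this (the interface names and addresses below are assumptions, not taken from the video). First, install a DHCP server and give eth0 a static address by adding two lines to /etc/dhcpcd.conf:

$ sudo apt install dnsmasq

interface eth0
static ip_address=192.168.220.1/24

Next, let the Raspberry Pi hand out an address to the Switch by adding two lines to /etc/dnsmasq.conf:

interface=eth0
dhcp-range=192.168.220.10,192.168.220.50,12h

Finally, enable packet forwarding and NAT so traffic flows between the Ethernet and wireless interfaces:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE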

To test his creation, Chris ran a speed test downloading a 10GB game, Pokémon Shield, using three different connection solutions. The Raspberry Pi hack came out “way ahead” of the wireless connection relying on the Switch alone. Of course, plugging your Switch directly into your internet router would get the fastest results of all, but routers have a habit of being miles away from where you want to sit and play.

Have a look at TopSpec on YouTube for more great videos.


Go back in time with a Raspberry Pi-powered radio

Take a musical trip down memory lane all the way back to the 1920s.

Sick of listening to the same dozen albums on repeat, or feeling stifled by the funnel of near-identical YouTube playlist rabbit holes? If you’re looking to broaden your musical horizons and combine that quest with a vintage-themed Raspberry Pi–powered project, here’s a great idea…

Alex created a ‘Radio Time Machine’ that covers ten decades of music, from the 1920s up to the 2020s. Each decade has its own Spotify playlist, with hundreds of songs from that decade played at random. This project, with the look of a vintage radio, offers a great, immersive learning experience and should turn up tonnes of musical talent you’ve never heard of.

In the comments section of their reddit post, Alex explained that replacing the screen of the vintage shell they housed the tech in was the hardest part of the build. On the screen, each decade is represented with a unique icon, from a gramophone, through to a cassette tape and the cloud. Here’s a closer look at it:

Now let’s take a look at the hardware and software it took to pull the whole project together…

Hardware:

  • Vintage Bluetooth radio (Alex found this affordable one on Amazon)
  • Raspberry Pi 4
  • Arduino Nano
  • 2 RGB LEDs for the dial
  • 1 button (on the back) to power on/off (long press) or play the next track (short press)

The Raspberry Pi 4 audio output is connected to the auxiliary input on the radio (3.5mm jack).

Software:

    • Mopidy music server with Spotify support
    • Custom Node.js app using the Johnny-Five library to read the button and potentiometer values, trigger the LEDs via the Arduino, and load the relevant playlists with Mopidy

Take a look at the video on reddit to hear the Radio Time Machine in action. The added detail of the white noise that sounds as the dial is turned to switch between decades is especially cool.

How do you find ten decades of music?

Alex even went to the trouble of sharing each decade’s playlist in the comments of their original reddit post.

Here you go:

1920s
1930s
1940s
1950s
1960s
1970s
1980s
1990s
2000s
2010s

Comment below to tell us which decade sounds the coolest to you. We’re nineties kids ourselves!


Retro Nixie tube lights get smart

Nixie tubes: these electronic devices, which can display numerals or other information using glow discharge, made their first appearance in 1955, and they remain popular today because of their cool, vintage aesthetic. Though lots of companies manufactured these items back in the day, the name ‘Nixie’ is said to derive from a Burroughs Corporation device named NIX I, an abbreviation of ‘Numeric Indicator eXperimental No. 1’.

We liked this recent project shared on reddit, where user farrp2011 used Raspberry Pi to make his Nixie tube display smart enough to tell the time.

A still from Farrp2011’s video shows he’s linked the bulb displays up to tell the time

Farrp2011’s set-up comprises six Nixie tubes controlled by Raspberry Pi 3, along with eight SN74HC shift registers that switch the 60 transistors grounding the pins for the digits displayed on the Nixie tubes. Sounds complicated? Well, that’s why farrp2011 is our favourite kind of DIY builder — they’ve put all the code for the project on GitHub.
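
The real code is in farrp2011’s GitHub repo; purely as an illustration of the shift-register approach, a Python sketch along these lines (the pin numbers and output wiring here are assumptions, not taken from the project) shows the general idea:

import RPi.GPIO as GPIO

DATA, CLOCK, LATCH = 17, 27, 22  # hypothetical BCM pin choices

GPIO.setmode(GPIO.BCM)
GPIO.setup([DATA, CLOCK, LATCH], GPIO.OUT, initial=GPIO.LOW)

def shift_out(bits):
    # clock each bit into the register chain, then latch them all at once
    for bit in bits:
        GPIO.output(DATA, bit)
        GPIO.output(CLOCK, GPIO.HIGH)
        GPIO.output(CLOCK, GPIO.LOW)
    GPIO.output(LATCH, GPIO.HIGH)
    GPIO.output(LATCH, GPIO.LOW)

def digit_pattern(position, digit, n_outputs=64):
    # one-hot pattern: turn on only the transistor that grounds
    # this tube's cathode for the chosen digit
    bits = [0] * n_outputs
    bits[position * 10 + digit] = 1
    return bits

shift_out(digit_pattern(position=0, digit=4))  # show '4' on the first tube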

Tales of financial woe from users trying to source their own Nixie tubes litter the comments section on the reddit post, but farrp2011 says they were able to purchase the ones used in this project for about $15 each on eBay. Here’s a closer look at the bulbs, courtesy of a previous post by farrp2011 sharing an earlier stage of the project…

Farrp2011 got started with one, then two Nixie bulbs before building up to six for the final project

Digging through the comments, we learned that for the video, farrp2011 turned their house lights off to give the Nixie tubes a stronger glow. So the tubes are not as bright in real life as they appear. We also found out that the drop resistor is 22k, with 170V as the supply. Another comments section nugget we liked was the name of the voltage booster boards used for each bulb: “Pile o’Poo“.

Upcoming improvements farrp2011 has planned include displaying the date, temperature, and Bitcoin exchange rate, but more suggestions are welcome. They’re also going to add some more capacitors to help with a noise problem and remove the need for the tubes to be turned off before changing the display.

And for extra nerd-points, we found this mesmerising video from Dalibor Farný showing the process of making Nixie tubes:


Code Robotron: 2084’s twin-stick action | Wireframe #38

News flash! Before we get into our Robotron: 2084 code, we have some important news to share about Wireframe: as of issue 39, the magazine will be going monthly.

The new 116-page issue will be packed with more in-depth features, more previews and reviews, and more of the guides to game development that make the magazine what it is. The change means we’ll be able to bring you new subscription offers, and generally make the magazine more sustainable in a challenging global climate.

As for existing subscribers, we’ll be emailing you all to let you know how your subscription is changing, and we’ll have some special free issues on offer as a thank you for your support.

The first monthly issue will be out on 4 June, and subsequent editions will be published on the first Thursday of every month after that. You’ll be able to order a copy online, or you’ll find it in selected supermarkets and newsagents if you’re out shopping for essentials.

We now return you to our usual programming…

Move in one direction and fire in another with this Python and Pygame re-creation of an arcade classic. Raspberry Pi’s own Mac Bowley has the code.

Robotron: 2084 is often listed on ‘best game of all time’ lists, and has been remade and re-released for numerous systems over the years.

Robotron: 2084

Released back in 1982, Robotron: 2084 popularised the concept of the twin-stick shooter. It gave players two joysticks which allowed them to move in one direction while also shooting at enemies in another. Here, I’ll show you how to recreate those controls using Python and Pygame. We don’t have access to any sticks, only a keyboard, so we’ll be using the arrow keys for movement and WASD to control the direction of fire.

The movement controls use a global variable, a few if statements, and two built-in Pygame Zero functions: on_key_down and on_key_up. The on_key_down function is called when a key on the keyboard is pressed, so when the player presses the right arrow key, for example, I set the x direction of the player to be a positive 1. Rather than setting the movement to 1, though, I add 1 to the direction, so that opposite keys can cancel each other out. The on_key_up function is called when a key is released. A key being released means the player doesn’t want to travel in that direction anymore, and so we do the opposite of what we did earlier – we take away the 1 or -1 we applied in the on_key_down function.

We repeat this process for each arrow key. Moving the player in the update() function is the last part of my movement; I apply a move speed and then use a playArea rect to clamp the player’s position.
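
Here’s a minimal, self-contained sketch of that movement scheme, with a plain rectangle standing in for Mac’s tank sprite. Save it as, say, move.py and run it with ‘pgzrun move.py’ after installing Pygame Zero (pip3 install pgzero):

WIDTH, HEIGHT = 800, 600
MOVE_SPEED = 4

play_area = Rect((0, 0), (WIDTH, HEIGHT))
player = Rect((WIDTH // 2, HEIGHT // 2), (30, 30))
direction = [0, 0]

def on_key_down(key):
    # each press adds its contribution to the direction
    if key == keys.RIGHT: direction[0] += 1
    if key == keys.LEFT:  direction[0] -= 1
    if key == keys.DOWN:  direction[1] += 1
    if key == keys.UP:    direction[1] -= 1

def on_key_up(key):
    # each release undoes exactly what the press added
    if key == keys.RIGHT: direction[0] -= 1
    if key == keys.LEFT:  direction[0] += 1
    if key == keys.DOWN:  direction[1] -= 1
    if key == keys.UP:    direction[1] += 1

def update():
    player.x += direction[0] * MOVE_SPEED
    player.y += direction[1] * MOVE_SPEED
    player.clamp_ip(play_area)  # keep the player inside the play area

def draw():
    screen.fill((20, 20, 20))
    screen.draw.filled_rect(player, (200, 30, 30))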

The arena background and tank sprites were created in Piskel. Separate sprites for the tank allow the turret to rotate separately from the tracks.

Turn and fire

Now for the aiming and rotating. When my player aims, I want them to set the direction the bullets will fire, which functions like the movement. The difference this time is that when a player hits an aiming key, I set the direction directly rather than adjusting the values. If my player aims up, and then releases that key, the shooting will stop. Our next challenge is changing this direction into a rotation for the turret.

Actors in Pygame can be rotated in degrees, so I have to find a way of turning a pair of x and y directions into a rotation. To do this, I use the math module’s atan2 function to find the arc tangent of two points. The function returns a result in radians, so it needs to be converted. (You’ll also notice I had to adjust mine by 90 degrees. If you want to avoid having to do this, create a sprite that faces right by default.)
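
In code, that conversion can look like the following; this small standalone helper assumes a sprite that faces right by default, and negates y because screen coordinates grow downwards:

import math

def aim_to_angle(aim_x, aim_y):
    # atan2 returns radians; an actor's angle wants degrees anticlockwise
    return math.degrees(math.atan2(-aim_y, aim_x))

print(aim_to_angle(1, 0))   # 0.0, aiming right
print(aim_to_angle(0, -1))  # 90.0, aiming up (y = -1 means 'up' on screen)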

To fire bullets, I’m using a flag called ‘shooting’ which, when set to True, causes my turret to turn and fire. My bullets are dictionaries; I could have used a class, but the only thing I need to keep track of is an actor and the bullet’s direction.

Here’s Mac’s code snippet, which creates a simple twin-stick shooting mechanic in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, go here.

You can look at the update function and see how I’ve implemented a fire rate for the turret as well. You can edit the update function to take a single parameter, dt, which stores the time since the last frame. By adding these up, you can trigger a bullet at precise intervals and then reset the timer.
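
As a tiny runnable illustration of that dt technique (the interval value and the print stand-in are my own; the real game spawns a bullet instead), run this with pgzrun:

WIDTH, HEIGHT = 400, 300
FIRE_INTERVAL = 0.3  # assumed: seconds between shots
shooting = True      # stands in for the real aiming flag
timer = 0.0

def update(dt):
    global timer
    timer += dt
    if shooting and timer >= FIRE_INTERVAL:
        print("fire!")  # a real game would create a bullet here
        timer = 0.0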

This code is just a start – you could add enemies and maybe other player weapons to make a complete shooting experience.

Get your copy of Wireframe issue 38

You can read more features like this one in Wireframe issue 38, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 38 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!


Learn at home: a guide for parents #2

With millions of schools still in lockdown, parents have been telling us that they need help to support their children with learning computing at home. As well as providing loads of great content for young people, we’ve been working on support tutorials specifically for parents who want to understand the programs used in schools and in our resources.

If you don’t know your Scratch from your Trinket and your Python, we’ve got you!

Glen, Web Developer at the Raspberry Pi Foundation, and Maddie, aged 8

 

What are Python and Trinket all about?

In our last blog post for parents, we talked to you about Scratch, the programming language used in most primary schools. This time Mark, Youth Programmes Manager at the Raspberry Pi Foundation, takes you through how to use Trinket. Trinket is a free online platform that lets you write and run your code in any web browser. This is super useful because it means you don’t have to install any new software.

A parents’ introduction to Trinket


Trinket also lets you create public web pages and projects that can be viewed by anyone with the link to them. That means your child can easily share their coding creation with others, and for you that’s a good opportunity to talk to them about staying safe online and not sharing any personal information.

Lincoln, aged 10

Getting to know Python

We’ve also got an introduction to Python for you, from Mac, a Learning Manager on our team. He’ll guide you through what to expect from Python, which is a widely used text-based programming language. For many learners, Python is their first text-based language, because it’s very readable, and you can get things done with fewer lines of code than in many other programming languages. In addition, Python has support for ‘Turtle’ graphics and other features that make coding more fun and colourful for learners. Turtle is simply a Python feature that works like a drawing board, letting you control a turtle to draw anything you like using code.
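
If you want a quick taste of Turtle yourself, this little example (any Python installation, or a Trinket, will run it) draws a square:

import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)  # move 100 steps forwards
    t.right(90)     # then turn 90 degrees right

turtle.done()       # keep the window open until it's closed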

A parents’ introduction to Python


Why not try out Mac’s suggestions of Hello world, Countdown timer, and Outfit recommender for yourself?

Python is used in lots of real-world software applications in industries such as aerospace, retail banking, insurance and healthcare, so it’s very useful for your children to learn it!

Parent diary: juggling homeschooling and work

Olympia is Head of Youth Programmes at the Raspberry Pi Foundation and also a mum to two girls aged 9 and 11. She is currently homeschooling them as well as working (and hopefully having the odd evening to herself!). Olympia shares her own experience of learning during lockdown and how her family are adapting to their new routine.

Parent diary: Juggling homeschooling and work


Digital Making at Home

To keep young people entertained and learning, we launched our Digital Making at Home series, which is free and accessible to everyone. New code-along videos are released every Monday, with different themes and projects for all levels of experience.

Code along live with the team on Wednesday 6 May at 14:00 BST / 9:00 EDT for a special session of Digital Making at Home

Sarah and Ozzy, aged 13

We want your feedback

We’ve been asking parents what they’d like to see as part of our initiative to support young people and parents. We’ve had some great suggestions so far! If you’d like to share your thoughts, you can email us at parents@raspberrypi.org.

Sign up for our bi-weekly emails, tailored to your needs

Sign up now to start receiving free activities suitable to your child’s age and experience level, straight to your inbox. And let us know what you as a parent or guardian need help with, and what you’d like more or less of from us. 

PS: All of our resources are completely free. This is made possible thanks to the generous donations of individuals and organisations. Learn how you can help too!

 


How to work from home with Raspberry Pi | The Magpi 93

If you find yourself working, learning, or simply socialising from home, Raspberry Pi can help with everything from collaborative productivity to video conferencing. Read more in issue #93 of The MagPi, out now.

01 Install the camera

If you’re using a USB webcam, you can simply insert it into a USB port on Raspberry Pi. If you’re using a Raspberry Pi Camera Module, you’ll need to unpack it, then find the ‘CAMERA’ port on the top of Raspberry Pi – it’s just between the second micro-HDMI port and the 3.5mm AV port. Pinch the shorter sides of the port’s tab with your nails and pull it gently upwards. With Raspberry Pi positioned so the HDMI ports are at the bottom, insert one end of the camera’s ribbon cable into the port so the shiny metal contacts are facing the HDMI port. Hold the cable in place, and gently push the tab back home again.

If the Camera Module doesn’t have the ribbon cable connected, repeat the process for the connector on its underside, making sure the contacts are facing downwards towards the module. Finally, remove the blue plastic film from the camera lens.

02 Enable Camera Module access

Before you can use your Raspberry Pi Camera Module, you need to enable it in Raspbian. If you’re using a USB webcam, you can skip this step. Otherwise, click on the raspberry menu icon in Raspbian, choose Preferences, then click on Raspberry Pi Configuration.

When the tool loads, click on the Interfaces tab, then click on the ‘Enabled’ radio button next to Camera. Click OK, and let Raspberry Pi reboot to load your new settings. If you forget this step, Raspberry Pi won’t be able to communicate with the Camera Module.
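
One quick way to confirm the Camera Module is working after the reboot (an extra check, not part of the original steps) is to capture a test shot from the Terminal:

$ raspistill -o test.jpg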

03 Set up your microphone

If you’re using a USB webcam, it may come with a microphone built-in; otherwise, you’ll need to connect a USB headset, a USB microphone and separate speakers, or a USB sound card with analogue microphone and speakers to Raspberry Pi. Plug the webcam into one of Raspberry Pi’s USB 2.0 ports, furthest away from the Ethernet connector and marked with black plastic inners.

Right-click on the speaker icon at the top-right of the Raspbian desktop and choose Audio Inputs. Find your microphone or headset in the list, then click it to set it as the default input. If you’re using your TV or monitor’s speakers, you’re done; if you’re using a headset or separate speakers, right-click on the speaker icon and choose your device from the Audio Outputs menu as well.

04 Set access permissions

Click on the Internet icon next to the raspberry menu to load the Chromium web browser. Click in the address box and type hangouts.google.com. When the page loads, click ‘Sign In’ and enter your Google account details; if you don’t already have a Google account, you can sign up for one free of charge.

When you’ve signed in, click Video Call. You’ll be prompted to allow Google Hangouts to access both your microphone and your camera. Click Allow on the prompt that appears. If you Deny access, nobody in the video chat will be able to see or hear you!

05 Invite friends or join a chat

You can invite friends to your video chat by writing their email address in the Invite People box, or copying the link and sending it via another messaging service. They don’t need their own Raspberry Pi to participate – you can use Google Hangouts from a laptop, desktop, smartphone, or tablet. If someone has sent you a link to their video chat, open the message on Raspberry Pi and simply click the link to join automatically.

You can click the microphone or video icons at the bottom of the window to temporarily disable the microphone or camera; click the red handset icon to leave the call. You can click the three dots at the top-right to access more features, including switching the chat to full-screen view and sharing your screen – which will allow guests to see what you’re doing on Raspberry Pi, including any applications or documents you have open.

06 Adjust microphone volume

If your microphone is too quiet, you’ll need to adjust the volume. Click the Terminal icon at the upper-left of the screen, then type alsamixer followed by the ENTER key. This loads an audio mixing tool; when it opens, press F4 to switch to the Capture tab and use the up-arrow and down-arrow keys on the keyboard to increase or decrease the volume. Try small adjustments at first; setting the capture volume too high can cause the audio to ‘clip’, making you harder to hear. When finished, press CTRL+C to exit AlsaMixer, then click the X at the top-right of the Terminal to close it.

Adjust your audio volume settings with the AlsaMixer tool

Work online with your team

Just because you’re not shoulder-to-shoulder with colleagues doesn’t mean you can’t collaborate, thanks to these online tools.

Google Docs

Google Docs is a suite of online productivity tools linked to the Google Drive cloud storage platform, all accessible directly from your browser. Open the browser and go to drive.google.com, then sign in with your Google account – or sign up for a new account if you don’t already have one – for 15GB of free storage plus access to the word processor Google Docs, spreadsheet Google Sheets, presentation tool Google Slides, and more. Connect with colleagues and friends to share files or entire folders, and collaborate within documents with simultaneous multi-user editing, comments, and change suggestions.

Slack

Designed for business, Slack is a text-based instant messaging tool with support for file transfer, rich text, images, video, and more. Slack allows for easy collaboration in Teams, which are then split into multiple channels or rooms – some for casual conversation, others for more focused discussion. If your colleagues or friends already have a Slack team set up, ask them to send you an invite; if not, you can head to app.slack.com and set one up yourself for free.

Discord

Built more for casual use, Discord offers live chat functionality. While the dedicated Discord app includes voice chat support, this is not yet supported on Raspberry Pi – but you can still use text chat by opening the browser, going to discord.com, and choosing the ‘Open Discord in your browser’ option. Choose a username, read and agree to the terms of service, then enter an email address and password to set up your own free Discord server. Alternatively, if you know someone on Discord already, ask them to send you an invitation to access their server.

Firefox Send

If you need to send a document, image, or any other type of file to someone who isn’t on Google Drive, you can use Firefox Send – even if you’re not using the Firefox browser. All files transferred via Firefox Send are encrypted, and can be protected with an optional password, and are automatically deleted after a set number of downloads or length of time. Simply open the browser and go to send.firefox.com; you can send files up to 1GB without an account, or sign up for a free Firefox account to increase the limit to 2.5GB.

GitHub

For programmers, GitHub is a lifesaver. Based around the Git version control system, GitHub lets teams work on a project regardless of distance using repositories of source code and supporting files. Each programmer can have a local copy of the program files, work on them independently, then submit the changes for inclusion in the master copy – complete with the ability to handle conflicting changes. Better still, GitHub offers additional collaboration tools including issue tracking. Open the browser and go to github.com to sign up, or sign in if you have an existing account, and follow the getting started guide on the site.

Read The MagPi for free!

Find more fantastic projects, tutorials, and reviews in The MagPi #93, out now! You can get The MagPi #93 online at our store, or in print from all good newsagents and supermarkets. You can also access The MagPi magazine via our Android and iOS apps.

Don’t forget our super subscription offers, which include a free gift of a Raspberry Pi Zero W when you subscribe for twelve months.

And, as with all our Raspberry Pi Press publications, you can download the free PDF from our website.


An open source camera stack for Raspberry Pi using libcamera

Since we released the first Raspberry Pi camera module back in 2013, users have been clamouring for better access to the internals of the camera system, and even to be able to attach camera sensors of their own to the Raspberry Pi board. Today we’re releasing our first version of a new open source camera stack which makes these wishes a reality.

(Note: in what follows, you may wish to refer to the glossary at the end of this post.)

We’ve had the building blocks for connecting other sensors and providing lower-level access to the image processing for a while, but Linux has been missing a convenient way for applications to take advantage of this. In late 2018 a group of Linux developers started a project called libcamera to address that. We’ve been working with them since then, and we’re pleased now to announce a camera stack that operates within this new framework.

Here’s how our work fits into the libcamera project.

We’ve supplied a Pipeline Handler that glues together our drivers and control algorithms, and presents them to libcamera with the API it expects.

Here’s a little more on what this has entailed.

V4L2 drivers

V4L2 (Video for Linux 2) is the Linux kernel driver framework for devices that manipulate images and video. It provides a standardised mechanism for passing video buffers to, and/or receiving them from, different hardware devices. Whilst it has proved somewhat awkward as a means of driving entire complex camera systems, it can nonetheless provide the basis of the hardware drivers that libcamera needs to use.

Consequently, we’ve upgraded both the version 1 (Omnivision OV5647) and version 2 (Sony IMX219) camera drivers so that they feature a variety of modes and resolutions, operating in the standard V4L2 manner. Support for the new Raspberry Pi High Quality Camera (using the Sony IMX477) will be following shortly. The Broadcom Unicam driver – also V4L2‑based – has been enhanced too, signalling the start of each camera frame to the camera stack.

Finally, dumping raw camera frames (in Bayer format) into memory is of limited value, so the V4L2 Broadcom ISP driver provides all the controls needed to turn raw images into beautiful pictures!

Configuration and control algorithms

Of course, being able to configure Broadcom’s ISP doesn’t help you to know what parameters to supply. For this reason, Raspberry Pi has developed from scratch its own suite of ISP control algorithms (sometimes referred to generically as 3A Algorithms), and these are made available to our users as well. Some of the most well known control algorithms include:

  • AEC/AGC (Auto Exposure Control/Auto Gain Control): this monitors image statistics in order to drive the camera exposure to an appropriate level.
  • AWB (Auto White Balance): this corrects for the ambient light that is illuminating a scene, and makes objects that appear grey to our eyes come out actually grey in the final image.

But there are many others too, such as ALSC (Auto Lens Shading Correction, which corrects vignetting and colour variation across an image), and control for noise, sharpness, contrast, and all other aspects of image processing. Here’s how they work together.

The control algorithms all receive statistics information from the ISP, and cooperate in filling in metadata for each image passing through the pipeline. At the end, the metadata is used to update control parameters in both the image sensor and the ISP.
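
To give a flavour of what one of these algorithms does, here is the classic ‘grey world’ white balance idea in a few lines of Python. This is a textbook illustration only, not Raspberry Pi’s actual AWB implementation: it assumes the scene averages out to grey, and scales the red and blue channels to match the mean of the green channel.

import numpy as np

def grey_world_awb(rgb):
    # rgb: float image of shape (height, width, 3), values in [0, 1]
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means[1] / means          # normalise each channel to green
    return np.clip(rgb * gains, 0.0, 1.0)

# a random image with a warm colour cast applied
image = np.clip(np.random.rand(480, 640, 3) * [1.2, 1.0, 0.7], 0, 1)
balanced = grey_world_awb(image)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now roughly equal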

Previously these functions were proprietary and closed source, and ran on the Broadcom GPU. Now, the GPU just shovels pixels through the ISP hardware block and notifies us when it’s done; practically all the configuration is computed and supplied from open source Raspberry Pi code on the ARM processor. A shim layer still exists on the GPU, and turns Raspberry Pi’s own image processing configuration into the proprietary functions of the Broadcom SoC.

To help you configure Raspberry Pi’s control algorithms correctly for a new camera, we include a Camera Tuning Tool. Or if you’d rather do your own thing, it’s easy to modify the supplied algorithms, or indeed to replace them entirely with your own.

Why libcamera?

Whilst ISP vendors are in some cases contributing open source V4L2 drivers, the reality is that all ISPs are very different. Advertising these differences through kernel APIs is fine – but it creates an almighty headache for anyone trying to write a portable camera application. Fortunately, this is exactly the problem that libcamera solves.

We provide all the pieces for Raspberry Pi-based libcamera systems to work simply “out of the box”. libcamera remains a work in progress, but we look forward to continuing to help this effort, and to contributing an open and accessible development platform that is available to everyone.

Summing it all up

So far as we know, there are no similar camera systems where large parts, including at least the control (3A) algorithms and possibly driver code, are not closed and proprietary. Indeed, for anyone wishing to customise a camera system – perhaps with their own choice of sensor – or to develop their own algorithms, there would seem to be very few options – unless perhaps you happen to be an extremely large corporation.

In this respect, the new Raspberry Pi Open Source Camera System is providing something distinctly novel. For some users and applications, we expect its accessible and non-secretive nature may even prove quite game-changing.

What about existing camera applications?

The new open source camera system does not replace any existing camera functionality, and for the foreseeable future the two will continue to co-exist. In due course we expect to provide additional libcamera-based versions of raspistill, raspivid and PiCamera – so stay tuned!

Where next?

If you want to learn more about the libcamera project, please visit https://libcamera.org.

To try libcamera for yourself with a Raspberry Pi, please follow the instructions in our online documentation, where you’ll also find the full Raspberry Pi Camera Algorithm and Tuning Guide.

If you’d like to know more, and can’t find an answer in our documentation, please go to the Camera Board forum. We’ll be sure to keep our eyes open there to pick up any of your questions.

Acknowledgements

Thanks to Naushir Patuck and Dave Stevenson for doing all the really tricky bits (lots of V4L2-wrangling).

Thanks also to the libcamera team (Laurent Pinchart, Kieran Bingham, Jacopo Mondi and Niklas Söderlund) for all their help in making this project possible.

 

Glossary

3A, 3A Algorithms: refers to AEC/AGC (Auto Exposure Control/Auto Gain Control), AWB (Auto White Balance) and AF (Auto Focus) algorithms, but may implicitly cover other ISP control algorithms. Note that Raspberry Pi does not implement AF (Auto Focus), as none of our supported camera modules requires it
AEC: Auto Exposure Control
AF: Auto Focus
AGC: Auto Gain Control
ALSC: Auto Lens Shading Correction, which corrects vignetting and colour variations across an image. These are normally caused by the type of lens being used and can vary in different lighting conditions
AWB: Auto White Balance
Bayer: an image format where each pixel has only one colour component (one of R, G or B), creating a sort of “colour mosaic”. All the missing colour values must subsequently be interpolated. This is a raw image format meaning that no noise, sharpness, gamma, or any other processing has yet been applied to the image
CSI-2: Camera Serial Interface (version) 2. This is the interface format between a camera sensor and Raspberry Pi
GPU: Graphics Processing Unit. But in this case it refers specifically to the multimedia coprocessor on the Broadcom SoC. This multimedia processor is proprietary and closed source, and cannot directly be programmed by Raspberry Pi users
ISP: Image Signal Processor. A hardware block that turns raw (Bayer) camera images into full colour images (either RGB or YUV)
Raw: see Bayer
SoC: System on Chip. The Broadcom processor at the heart of all Raspberry Pis
Unicam: the CSI-2 receiver on the Broadcom SoC on the Raspberry Pi. Unicam receives pixels being streamed out by the image sensor
V4L2: Video for Linux 2. The Linux kernel driver framework for devices that process video images. This includes image sensors, CSI-2 receivers, and ISPs
