News page

The most recent news from our NCC.

LUMI-Q consortium one step closer to its quantum computer

Signing of the hosting agreement for the quantum computer of the European LUMI-Q consortium

The hosting agreement for the acquisition and operation of the European LUMI-Q consortium's quantum computer was officially signed; the system will be installed at the IT4Innovations National Supercomputing Centre in Ostrava, Czechia. The agreement with the EuroHPC Joint Undertaking governs the roles, rights, and obligations of the parties. The procurement process for the quantum computer will be managed directly by the EuroHPC JU and launched shortly. The investment costs for the procurement of the quantum computer are planned to amount to a maximum of EUR 7 million, co-financed 50% from the EuroHPC JU budget under the Digital Europe Programme (DEP) and 50% from the contributions of the member countries of the LUMI-Q consortium.

"Signing the agreement to host the LUMI-Q quantum computer in the Czech Republic is an important milestone not only for the Czech research community in the field of quantum computers and algorithms but also represents a significant step towards developing European quantum computing resources. Together with other European partners, we are creating an important element of future scientific progress in quantum computing and its applications," said Branislav Jansik, IT4Innovations` Supercomputing Services Director and the coordinator of the LUMI-Q project.

The LUMI-Q consortium, which brings together nine European countries (Belgium, Czechia, Denmark, Finland, Germany, the Netherlands, Norway, Poland, and Sweden), aims to provide academic and industrial users with a quantum computer based on superconducting qubits with a star-shaped topology. The advantage of this topology is that it minimises the number of so-called swap operations and thereby enables the execution of very complex quantum algorithms. The machine is expected to contain at least 12 qubits. This quantum computer will be directly connected to the EuroHPC supercomputer KAROLINA, located at IT4Innovations in Ostrava. In addition, the plan is to connect it to other EuroHPC supercomputers, especially those hosted by other members of the LUMI-Q consortium, such as the most powerful European supercomputer LUMI and the supercomputer Helios, which will be located in Krakow, Poland.

Quantum computers have the revolutionary potential to bring a new approach to computing and to solving computationally extremely complex problems. Unlike classical computers, which work with binary bits, quantum computers use quantum bits (qubits) to perform parallel computations and exploit quantum phenomena such as superposition and quantum entanglement. This gives them a unique ability to efficiently solve some problems that are too difficult for classical computers to handle. Examples include optimisation problems, computing the electronic structure of new materials, and logistics such as traffic and port management. Many other applications are currently being developed and can be found in almost all scientific and computational domains, such as the automotive industry, the development of new electric batteries, energy, finance, pharmaceutics, quantum chemistry, cryptography, quantum machine learning, and many more. Quantum computers have the potential to dramatically shape scientific research and technological development in all fields, from physics and chemistry to artificial intelligence and bioinformatics.

Boilerplate 

The LUMI-Q consortium will provide a Europe-wide quantum computing environment integrated with the EuroHPC infrastructure. The proposed concept allows the integration of the targeted EuroHPC quantum computer with multiple EuroHPC supercomputers, including KAROLINA in Czechia, LUMI in Finland, and EHPCPL in Poland. The LUMI-Q consortium brings together nine European countries: Belgium, Czechia, Denmark, Finland, Germany, the Netherlands, Norway, Poland, and Sweden.

LUMI-Q Consortium partners 

  • Coordinator: VSB – Technical University of Ostrava, IT4Innovations National Supercomputing Center, Czechia
  • CSC – IT Center for Science, Finland
  • VTT Technical Research Centre of Finland Ltd, Finland
  • Chalmers University of Technology, Sweden
  • Technical University of Denmark (DTU), Denmark
  • Academic Computer Centre Cyfronet AGH, Poland
  • Nicolaus Copernicus Astronomical Center, Poland
  • Nordic e-Infrastructure Collaboration (NeIC)
  • Sigma2 AS, Norway
  • Simula Research Laboratory, Norway
  • SINTEF AS, Norway
  • Deutsches Zentrum für Luft- und Raumfahrt, Germany
  • University of Hasselt, Belgium
  • TNO Netherlands Organisation for Applied Scientific Research, the Netherlands
  • SURF BV, the Netherlands

Interview: Preparing Students for the Future - Prof. Dirk Valkenborg's Approach to Supercomputing Education

Prof. Dirk Valkenborg

Prof. Dirk Valkenborg is the programme director of the data science track in the Master of Statistics & Data Science at UHasselt. He teaches a course on Machine Learning in this programme, and during this course his students use the supercomputing infrastructure of the Flemish Supercomputer Center (VSC). We sat down with Prof. Valkenborg to ask him why he is so keen on getting his students enthusiastic about working with a supercomputer.

Why did you introduce supercomputing to your students?


In the Master of Statistics & Data Science, particularly within the Data Science track, students must handle large datasets, so we needed a solution to provide our students with more computational power. We also wanted to teach them some basic skills in supercomputing as preparation for their later professional life. Several options were available for consideration.
One possibility was extending our local infrastructure and granting students access to these facilities. But this extension would be very costly and require hardware upgrades and continuous maintenance, so it was not feasible. Alternatively, we could use a platform like Google Colab or Kaggle, which also has limitations: when a model has to be trained on a larger dataset, you get thrown out of the service because it takes too much time.


We contacted the VSC to check whether there was any possibility of giving our students access to the VSC computing facilities, and this arrangement turned out to be feasible. Of course, the barrier to entry here is higher. The other two options had a more accessible entry point: run a Jupyter notebook on a GPU via Colab or Kaggle. When opting for local infrastructure, we would give the students a bit more computing power, but their mode of operation would remain the same as on their regular laptop. Students would not think about parallelisation, making data available, job submission, etc. Using the VSC infrastructure forces them to do so.
Also, local infrastructure does not scale, whereas the VSC infrastructure does.

You decided to introduce the students to supercomputing in the first semester of the second year of the Master's. Why?

In our master's programme, the master's thesis project only becomes the focus in the last three months of the academic year. Getting students on board with supercomputing during that short period is challenging. Therefore, giving them a taste of the VSC capacity earlier in their educational journey is much more convenient.
I teach a Machine Learning course in the second year of the Master of Statistics & Data Science. In this course, a relatively large dataset needs to be analysed, and this can perfectly well be done using supercomputing infrastructure. So, we added this as an extra competence for this course.

Prof. Bex and I organise two sessions in the Machine Learning course to gently introduce students to remote computing. One of the final objectives of the Machine Learning course is that their final model runs on a supercomputer.

 

What is the added value for the students?

Companies like VITO or Janssen Pharmaceutica have their own supercomputing infrastructure with a slightly different interface, but it is all terminal-based. Once they get on board with the philosophy, students can work on other infrastructures too.

So, it gives them additional skills for their first steps into the job market.
Or if they want to do a Ph.D., they are one step ahead. Many of our Ph.D. students get the same training as the one we provide in our Machine Learning course.

If we ask Ph.D. students who have worked on their Ph.D. for several years to work with a supercomputer, they are reluctant to make that switch. They think: 'I will just leave my laptop on for a few nights.' You can compare it with learning a new language: the sooner you start, the better. The earlier students learn their way around a supercomputer, the better.

Which students are currently involved in this? 

We currently focus on students of the Master of Statistics & Data Science. Still, these are elective courses and can also be taken by others (students entering from biology, biostatistics, or bioinformatics). This is the student's choice, but we are considering how to make this compulsory for all our students, because supercomputing is also relevant in statistics, bioinformatics, epidemiology, etc.

How did you convince the students to step in? How did you lower the threshold for your students?

I first explained the project's added value and told them it was necessary. It is a competence evaluated through a pass/fail system: the student fails if they cannot run their script on the supercomputer.

I test two competencies in this way. First, are students technically proficient enough to submit a job on a supercomputer? Second, are they able to write a proper script? I still notice that, in practice, when working with scripting languages (especially R), students often have a text file full of R code which they run block by block: they select a few lines of code, run that code, scroll to another piece in their document, and then select the next code they wish to run. This is incompatible with the philosophy of submitting a job to a supercomputer. There, students have to program in a logical, linear fashion.

What lessons did you learn from this experience, and how would you address them?

Because I am the only professor requiring this, it’s a bit of a forced exercise: you see students ticking the box, struggling for a while, and then quickly falling back into old habits. To consolidate this better, I recommend that every professor work with the pass/fail method if possible.

I also noticed that students don't start working on the supercomputer immediately after the introduction. Because the assignment is to run the final model on the supercomputer, which happens sometime in December, they postpone it until then. So, during that period, we received many questions about how to get started, because the introduction was too long ago. Luckily, the introduction session was recorded, so they could watch it again.
For the future, we are thinking about ways to get students to work with it immediately and consolidate their knowledge. We could explain how to do it and let them practise through exercises so they do it repeatedly. After three or four times, they will master the skill.

How would you convince other professors to use supercomputing?

For me, it was a need that became more pressing in the master's thesis. If students want to deal with data and start applying serious computing to it, they have to switch to a more extensive system. Introducing them to supercomputing early in their career is an excellent way to prepare them for this.
In the course where I apply it, it is not strictly necessary, but this way they can already learn those skills on a small case and see the added value of such a system.

The more we require students to use this, the more natural this becomes, and that’s a plus.

Thank you, Prof. Valkenborg, for this inspiring talk. We wish you good luck with the course!


EuroCC Belgium encourages the use of supercomputing in education and wants to help others teach how to use a supercomputer effectively. Therefore, we provide a professional teaching kit, including a comprehensive slide deck. More material (videos, etc.) will follow soon.

Check out our teaching kit via this link. Unlock the potential of others with our training tools!

 

 

Attention: 3-6 October 2023 Comprehensive General LUMI course - Warsaw/Online

LUMI Supercomputer

This four-day on-site (Warsaw, Poland) and online course serves as a general, comprehensive introduction to the LUMI architecture and programming environment. It includes lessons on compiling and using software, programming models (HIP and OpenMP offload), porting, executing jobs, and optimizing applications to run on the AMD MI250X GPUs. After the course, you will be able to work efficiently on both the CPU partition (LUMI-C) and the GPU partition (LUMI-G).
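
To give a flavour of what "OpenMP offload" looks like in practice, below is a minimal, hypothetical vector-addition sketch. It is not part of the course material; the exact compilers, flags, and modules to use on LUMI-G are covered in the course itself, and the code simply falls back to the host CPU when no offload-capable device or compiler support is available.

    // Minimal, illustrative sketch of OpenMP target offload (one of the
    // programming models covered in the course). Hypothetical example,
    // not taken from the course material.
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);
        double *pa = a.data(), *pb = b.data(), *pc = c.data();

        // Map the input arrays to the device, run the loop there, copy c back.
        #pragma omp target teams distribute parallel for \
            map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
        for (int i = 0; i < n; ++i)
            pc[i] = pa[i] + pb[i];

        std::printf("c[0] = %f\n", pc[0]);  // expected: 3.000000
        return 0;
    }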

Please note that this is a comprehensive course with an emphasis on code development and analysis tools.

The four-day course is "on-site first": on-site participants get plenty of time to interact with the LUST, HPE, and AMD staff who are present, but we cannot organize an equivalent interaction with the online participants. For them, it is mostly a broadcast of the lectures with the option to raise questions via a shared document. Anything that would deteriorate the experience for the on-site participants is avoided. The recent course in Tallinn has shown that direct interaction with the people present in the room provides a lot of additional value that is hard to get in an online setup.

Target group

The course is intended for users with ongoing projects on LUMI, users with project proposals in one of the national or EuroHPC channels, and support staff of local organizations of the LUMI consortium members.

More information is available at: https://lumi-supercomputer.eu/events/general-lumi-course-oct2023/

Registration - deadline: 25/09/2023 at 16:00 CEST

Register here: https://events.plgrid.pl/e/lumi-general-course

 

New User Story - How supercomputing helps Atlas Copco to better explore and design filter media

Streamlines of flow through a filter medium, coloured by velocity magnitude

Read our latest user story: https://www.enccb.be/atlascopco

Did you know that Atlas Copco extensively uses computing, computer models, and digital twins to predict the behaviour of various products in different conditions? Tom Saenen (Atlas Copco, Technology Developer Computational Fluid Dynamics): "Supercomputing allows for more extensive exploration and optimisation of a design. It also permits the simulation of more physics and larger problems that were infeasible before, like for instance having a better understanding of microscale air and oil flow behaviour in oil aerosol filter media." Want to know how supercomputing helped to gain insight into the design of filter media, leading to cleaner air delivered at a lower energy cost for the customer?

06/06/2023 EuroCC Training - Performance-aware C++ programming

C++ code

6 June 2023 - Performance-aware C++ programming 

Description

The C++ programming language is a systems-level language pervasive in many areas, including scientific computing. This complex language provides users with a multitude of approaches to solving complex tasks, but not all programming paradigms or approaches are equal in terms of performance.

This 'back-to-basics' training shifts slightly away from over-abstraction or potential overuse of libraries and focuses primarily on performance through the use of the language itself. We show that significant speedups are possible through several key principles:

  1. Having some understanding of the hardware (bandwidth, latency, caches)
  2. Data-Oriented Programming versus Object-Oriented Programming (see the sketch after this list)
  3. Vectorization and parallelization
  4. Compiler and how to get the most out of it
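
As a small illustration of principle 2, the sketch below (hypothetical names; not part of the training material) contrasts an array-of-structures layout with a structure-of-arrays layout. Summing a single field over the SoA layout streams through contiguous memory and is typically easier for the compiler to vectorize:

    // Illustrative sketch: the same reduction over an array-of-structures
    // (AoS) layout and a structure-of-arrays (SoA) layout. All names are
    // made up for this example.
    #include <cstdio>
    #include <cstddef>
    #include <vector>

    // AoS: fields of each particle are interleaved; summing only x still
    // drags the other fields through the cache.
    struct ParticleAoS {
        double x, y, z, mass;
    };

    double sum_x_aos(const std::vector<ParticleAoS>& ps) {
        double s = 0.0;
        for (const auto& p : ps) s += p.x;   // strided access pattern
        return s;
    }

    // SoA: each field is stored contiguously; the loop is easy to vectorize.
    struct ParticlesSoA {
        std::vector<double> x, y, z, mass;
    };

    double sum_x_soa(const ParticlesSoA& ps) {
        double s = 0.0;
        for (std::size_t i = 0; i < ps.x.size(); ++i) s += ps.x[i];  // contiguous
        return s;
    }

    int main() {
        const std::size_t n = 1000000;
        std::vector<ParticleAoS> aos(n, ParticleAoS{1.0, 2.0, 3.0, 4.0});
        ParticlesSoA soa{std::vector<double>(n, 1.0), std::vector<double>(n, 2.0),
                         std::vector<double>(n, 3.0), std::vector<double>(n, 4.0)};
        std::printf("AoS sum: %f, SoA sum: %f\n", sum_x_aos(aos), sum_x_soa(soa));
        return 0;
    }

On most hardware, the SoA version of such a loop tends to be faster once the data no longer fits in cache, which is exactly the kind of effect the training explores.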

By the end of the training you will have some intuition on performance and what affects it, as well as pointers for improving performance in your own projects.

Whether your goal is shortening the calculation time of your simulations/workloads or enabling solutions which are otherwise impractical, extracting the most out of the available hardware is a key skill. This training provides an introduction to this topic with a few exploratory trips into the core details. 

Target audience:

  • Intermediate level; some experience with C++ is recommended
  • Anyone with an interest in performance-critical applications

Practical information 

Date & time: 6/06/2023 - from 9.00 am to 11.30 am 

Location: online (after registration, a link will be sent to you prior to the training)

Registrations for this training are now closed.