[highlight]These notes are made from:
[link href='https://www.futurelearn.com/courses/intro-to-quantum-computing']Understanding Quantum Computing - Keio University (Future Learn)[/link][/highlight]
[hr]
So problems are organised by complexity: linear time is O(n), polynomial time is O(n^k), and exponential time is O(e^n).
Hundreds of algorithms for quantum computers have been found that would decrease the complexity of a problem significantly. The improvement usually isn't a full jump from O(e^n) down to O(n^k), but more like n^4 down to n^2, and sometimes more.
These speedups would make quantum computers very useful, as they could solve very hard problems in much shorter times, so there is real demand for them.
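To make these growth rates concrete, here is a small sketch (illustrative step counts only, not any particular algorithm) comparing how linear, polynomial and exponential costs grow, and how much a polynomial-degree reduction like n^4 to n^2 saves:

```python
# Illustrative step counts for the complexity classes mentioned above.
def linear(n):      return n        # O(n)
def polynomial(n):  return n ** 4   # O(n^k), here k = 4
def exponential(n): return 2 ** n   # exponential growth

for n in (10, 20, 40):
    print(n, linear(n), polynomial(n), exponential(n))

# A quantum speedup like n^4 -> n^2 divides the work by a factor of n^2:
n = 1000
print(polynomial(n) // (n ** 2))  # -> 1000000
```

Even for modest n, the exponential column dwarfs the others, which is why shaving the degree of a polynomial (or escaping an exponential) matters so much.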
[h2] Moore's Law[/h2]
Not really a law, more of a trend that Gordon Moore articulated in 1965: the number of transistors on a chip doubles every 2-3 years.
It's more of an economic 'law': given the current fabrication technology, at what point is the price per transistor minimised?
But it is coming to an end.
Transistors today contain a countable number of atoms; they are made of silicon and are generally 10-15 nanometres across.
A typical bit in a CPU is built from a transistor with a silicon channel and a gate placed over that channel, which controls the flow of electrons through the semiconducting silicon.
When a certain voltage is applied to the gate, electrons cannot flow through the channel; other voltages allow electrons to flow.
Making a transistor requires the intentional introduction of impurities, each called a [b]dopant[/b], in a process called doping. Doping influences the electrical properties of the semiconductor, making it easier or harder
for electrons to flow, and thus makes it suitable for use as a transistor.
Once the number of atoms in a transistor drops below a certain level, the dopants can no longer influence the electrons properly, and that may be a major factor in the end of Moore's Law, as there is ultimately
a physical limit to how small transistors can get.
Trade-offs can be made: more transistors can be fitted by stacking them one atop another, but the heat produced would become unmanageable.
Instead, scientists are working on numerous 'post-silicon' technologies, including photonics and quantum machines.
Furthermore, development of quantum computers also contributes directly to concepts such as reversible computing.
[highlight]
[h2]Reversible Computing[/h2][br]
Reversible computing is basically the idea that you can recover the inputs of a calculation from its outputs, by adding some extra component to the output.
An irreversible operation is one such as addition on its own, as you cannot infer what the inputs were from the output alone.
However, outputting both the sum and the difference of the inputs makes it possible to uniquely identify what the inputs were (if there were two of them), and that is a reversible operation.
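The sum-and-difference idea above can be sketched in a few lines; this is just an illustration of reversibility, not how a reversible machine would actually be built:

```python
def forward(a, b):
    # The sum alone is irreversible: many (a, b) pairs share one sum.
    # Keeping the difference as well makes the mapping invertible.
    return a + b, a - b

def backward(s, d):
    # Recover the original inputs from (sum, difference).
    return (s + d) // 2, (s - d) // 2

s, d = forward(7, 3)
print(backward(s, d))  # -> (7, 3)
```

Because no information is discarded, the operation can always be run backwards, which is exactly the property reversible computing needs.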
Landauer's Principle states that there is an absolute minimum amount of energy required for an irreversible computation, but it does not apply to reversible operations. That makes reversible operations
useful: they can dip below that minimum energy with no obstruction, leading to ever more efficient machines.
However, such machines would have to be reversible right up to the algorithmic level, so they would be incredibly hard to implement, if they can exist at all, and they would likely have limited use cases
rather than being general-purpose machines, unless we make new reversible programming languages and build everything around reversible concepts.
This would mean that writing reversible programs needs a lot of human insight rather than some sort of automatic conversion program, so developing all of these programs would be hard, but not
impossible if the machine itself isn't impossible, as reversible programming languages do already exist.
And eventually, it could cut the energy wasted by computers down to zero.
[/highlight]
[h2] Factoring large numbers and Quantum Machines[/h2]
Encryption consists of three main stages: authentication (normally using public key cryptography, RSA for example), key exchange (using the Diffie-Hellman key exchange, for example), and then bulk data encryption using
the keys.
RSA (created by Rivest, Shamir and Adleman) is a form of public key cryptography: a private key is generated which can decrypt data, a public key is generated which can encrypt data, and then
mathematical functions are applied to the data using the public key to encrypt it, and later the private key to decrypt it.
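A toy sketch of that public/private key relationship, using the standard textbook-sized primes (real RSA uses primes hundreds of digits long plus padding schemes, so this only illustrates the arithmetic):

```python
# Toy RSA with tiny textbook primes; for illustration only.
p, q = 61, 53
n = p * q                    # modulus, part of both keys
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m):  return pow(m, e, n)  # anyone with (e, n) can encrypt
def decrypt(c):  return pow(c, d, n)  # only the holder of d can decrypt

m = 65
print(decrypt(encrypt(m)))  # -> 65
```

The security rests on n being hard to factor back into p and q, which is exactly what Shor's Algorithm (below) threatens.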
The Diffie-Hellman key exchange is an algorithm used to establish a shared secret between two parties. It is primarily used to exchange cryptographic keys for use in symmetric encryption algorithms; look
it up for more information, as it's pretty involved.
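The core shared-secret idea can still be sketched briefly with a tiny prime (real deployments use primes of 2048+ bits or elliptic curves; the names and numbers here are just illustrative):

```python
# Toy Diffie-Hellman over a tiny prime; for illustration only.
p, g = 23, 5          # public parameters: prime modulus and generator

a = 6                 # Alice's private number, never sent
b = 15                # Bob's private number, never sent

A = pow(g, a, p)      # Alice sends A = g^a mod p
B = pow(g, b, p)      # Bob sends B = g^b mod p

# Each side combines the other's public value with its own secret,
# arriving at the same number without ever transmitting it:
secret_alice = pow(B, a, p)
secret_bob = pow(A, b, p)
print(secret_alice == secret_bob)  # -> True
```

An eavesdropper sees only p, g, A and B; recovering the secret requires solving a discrete logarithm, which is hard classically for large numbers.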
Both of those steps rely on large numbers, and if it became easy to factor those large numbers, it would become easy to eavesdrop on internet-based communication, even with banks and the like,
which is a huge potential security risk, although in other fields efficient factoring could be of great social and economic interest.
In the 1990s Shor's Algorithm was discovered, allowing quantum machines to factor huge numbers in O(n^3) (polynomial time) rather than roughly O(sqrt(10^n)), which led to a huge influx of people and money into the field.
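Using the two complexities quoted above (step counts only, constants ignored), a quick comparison shows how fast the gap opens up as the number of digits n grows:

```python
import math

# Step counts for factoring an n-digit number, using the complexities
# quoted above: Shor ~ n^3 versus a classical ~ sqrt(10^n) approach.
def shor_steps(n):      return n ** 3
def classical_steps(n): return math.isqrt(10 ** n)

for n in (10, 100, 300):
    print(n, shor_steps(n), classical_steps(n))
```

At 10 digits the two are comparable; by a few hundred digits (typical RSA key sizes) the classical count is astronomically larger.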
[h2]Algorithms and games[/h2]
Chess has around 10^120 possible games, Shogi around 10^220, and Go around 10^360, which are all astronomically large numbers.
The most straightforward way to figure out how to win is the [b]brute force approach[/b]: try every possible combination of pieces in the game, and then you know the ideal move for each state of the
board.
In board games specifically, after each move there are a certain number of ways in which the game could branch off. The number of moves you have to consider at each step is called the branching factor, and each
step is called a ply. If the branching factor is [i]n[/i] and you look ahead [i]k[/i] plies, there are n^k possibilities you need to investigate.
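A quick sketch of that n^k growth, using a rough chess-like branching factor (the figure 35 is a common illustrative estimate, not from the course):

```python
# Game-tree size with branching factor n looked ahead k plies: n^k positions.
def tree_size(branching_factor, plies):
    return branching_factor ** plies

# Illustrative chess-like numbers: roughly 35 legal moves per position.
print(tree_size(35, 4))   # -> 1500625 (4 plies ahead is already ~1.5 million)
print(tree_size(35, 10))  # 10 plies ahead is astronomically larger
```

Each extra ply multiplies the work by the branching factor, which is why exhaustive search breaks down so quickly.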
But this is still too large a number to compute even for supercomputers, so both humans and computers rely on heuristics, that is, on identifying 'good' moves and 'good' options. For example, it can be
considered good to capture an opponent's piece, so the computer can be made to focus and spend more time on those branches than others. This may not lead to an optimal solution, and it relies on the
problem being structured, in the sense that there is some evidence to guide your program towards a good, if not optimal, solution.
[b]Grover's Algorithm[/b], which is really more a broad algorithmic framework with wide use cases, is a quantum algorithm for the case where there is no structure to guide the program. If there is no way to discern
whether a candidate at a certain ply is a good or bad candidate, it can be very effective. The advantage of Grover's algorithm isn't as great as Shor's, though, due to their different use cases: it reduces
the number of searches to the square root of the original number, so for chess that is 10^120 down to 10^60, which is still enormous, so one must be careful when picking problems to apply Grover's algorithm to.
The algorithm, or more correctly the [i]algorithmic framework[/i], can be applied to basically any search problem where we are looking for a path to a particular result, so small-data problems with high branching factors,
like the aforementioned board games, are good candidates for the algorithm.
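The square-root saving is easy to see numerically (step counts only; this sketches the size of Grover's advantage, not the algorithm itself):

```python
import math

# Grover's framework reduces an unstructured search over N candidates to
# roughly sqrt(N) steps (constants ignored; illustrative counts only).
def grover_steps(n_candidates):
    return math.isqrt(n_candidates)

print(grover_steps(10 ** 120))  # chess-sized search: 10^120 -> 10^60
print(grover_steps(1_000_000))  # -> 1000
```

Halving the exponent is a huge gain, but as the chess figure shows, the square root of an astronomical number can still be astronomical.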
[h2]Quantum Chemistry[/h2]
It's difficult for a classical machine to solve quantum chemistry problems, which involve the quantum mechanical interactions of particles, and so Feynman proposed that a machine be built that operates on those
very principles to solve those problems.
In quantum chemistry, researchers are very interested in the ground-state electronic energy of a system's electrons, and this is difficult for a classical computer to compute, because the cost grows
exponentially as more atoms and electrons are added.
The size of the Hilbert space (the set of quantum states) grows exponentially with the number of particles, and it quickly becomes intractable to handle with all known classical algorithms, which is where quantum
computers would become useful.
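That Hilbert-space growth is just doubling per particle, which a two-line sketch makes concrete (dimension counts only; the "atoms in the universe" remark is the usual rough comparison, not a course figure):

```python
# The Hilbert space of n two-level particles (qubits) has dimension 2^n, so
# the classical memory needed to store a general state doubles per particle.
def hilbert_dim(n_particles):
    return 2 ** n_particles

for n in (10, 50, 300):
    print(n, hilbert_dim(n))
# 2^300 is already larger than the estimated number of atoms in the
# observable universe, so storing such a state classically is hopeless.
```

A quantum machine sidesteps this because its own n qubits natively occupy that 2^n-dimensional space.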
Quantum chemistry is very broad, and includes things like finding the vibrational structure of molecules, which relates to their thermal properties. People are also interested in how atoms respond to
certain optical frequencies, what colours they give off, and what they absorb. Explaining why these phenomena are as they are is what quantum chemistry and physics may help with.
With drug design, for instance, if you're looking for a specific drug which binds to a specific active site in a protein, you currently have to just try everything and see which one works for that site. But
using quantum machines, you could try these large numbers of combinations faster and with larger sets of data, which would allow for better research into drug design and material design.
Note that some people are also interested simply in developing a tool set, based on quantum machines, that can be generalised and applied to a wide range of systems to come up with very good solutions.
One key place where quantum machines may help greatly is in making something like the Haber-Bosch process more efficient, that is, using the nitrogen in the air to make fertilisers. At present that
process uses 1%-2% of global energy production, so any improvement would be very beneficial.
This problem can be tackled specifically because much of the inefficiency arises from its catalysts, which are large molecules, so quantum machines could be used to search for better
catalysts and make the process more energy efficient.
[h2]Linear Algebra[/h2]
Quantum machines can offer a significant speedup on many linear algebra problems (involving matrices and vectors), such as multiplying matrices and computing powers of matrices.
For some specialised variants of these problems, quantum machines offer superpolynomial speedups, for others the speedup is polynomial.
[h2]Machine Learning[/h2]
Machine learning, and specifically [i]deep learning[/i], for things like facial recognition and voice recognition, can get considerable speedups because many of its subroutines are built on linear
algebra. This is particularly economically attractive to bring to realisation, as it could offer solutions to many real-world problems, and just as with linear algebra, there are potential superpolynomial
speedups for some specialised variants of these problems.
[h2]Graph Algorithms[/h2]
Graph problems can also get polynomial speedups, for example finding the shortest path between two points, or finding whether two points are even connected in a very large graph (a graph consists of nodes,
also called vertices, joined by edges). Sometimes you also want to know how many triangles are formed by the nodes and edges of the graph, and quantum machines can be useful for this as well.
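For reference, here is the classical baseline for the triangle-counting problem just mentioned, as a brute-force sketch (the graph below is a made-up example):

```python
from itertools import combinations

# Count triangles: triples of nodes that are pairwise joined by edges.
def count_triangles(nodes, edges):
    edge_set = {frozenset(e) for e in edges}
    return sum(
        1
        for a, b, c in combinations(nodes, 3)
        if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= edge_set
    )

nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(count_triangles(nodes, edges))  # -> 1 (the triangle 1-2-3)
```

Checking every triple of nodes grows quickly with graph size, which is what makes speedups interesting here.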
[h2]Optimisation[/h2]
A more general benefit would be optimisation, in the field known as operations research, which involves optimising some value or process for a business, for example reducing the amount of materials used by a factory,
or maximising profit based on a set of assumptions.
Organisations would focus on whether their problem can be solved, given its size and some characteristics of the input data.
Many specialised algorithms exist for certain problems, but how well they work for the different problems that arise in the real world is as yet unknown. This is a field of great interest and much research, and one of
the greatest reasons to build quantum computers.
[h2]Function Growth[/h2]
I've noted Shor's Algorithm, quantum chemistry algorithms and search algorithms so far.
Grover's Algorithm can be used as a search algorithm; it reduces the time taken to the square root of the original complexity, so a problem of complexity *N* is sped up to roughly the square root of *N*.
It is a very general algorithm, more like a broad framework, and can be applied to essentially any search problem to speed it up on a quantum computer.
It is also important to note that even without reducing the complexity of a problem, if the same problem can be run faster on a quantum machine than on a classical computer, that would still be
advantageous.
Also note that range matters. When a problem runs on a machine with a different complexity from the original algorithm, it may still be faster to use the classical algorithm for small
inputs, as the quantum algorithm can be slower over some ranges.
For example, a runtime of n^2 is smaller than 100n + 5000 until *n* reaches about 137, so the original quadratic complexity would be preferable over that range; range really does matter.
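The crossover point can be found directly (treating the two expressions as plain step counts, constants included):

```python
# Find where the quadratic runtime n^2 overtakes the linear one 100n + 5000.
def quadratic(n): return n ** 2
def linear(n):    return 100 * n + 5000

crossover = next(n for n in range(1, 10_000) if quadratic(n) > linear(n))
print(crossover)  # -> 137: below this, the 'worse' complexity is actually faster
```

So an algorithm with better asymptotic complexity only wins once the input is large enough to outgrow its constant overheads.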
[h2]Picking good problems[/h2]
For the foreseeable future, quantum computers will be relatively bad at reading and writing classical data, so the input and output requirements of a suitable problem have to be relatively low.
On top of that, the memory requirements of the process need to be quite low, that is, below about 10^4 qubits in size, as for a while quantum memory will not be anywhere near the gigabytes we have on
classical machines.
And thirdly, the problem should be hard to solve on a classical computer: the number of possible answers to be checked must increase quickly relative to how fast the size of the problem increases, making it
suitable for quantum computation and for applying things such as Grover's Algorithm (again, more of a framework, but still).
[h2]Quantum Computing ≠ Supercomputing and Big Data[/h2]
Supercomputers typically receive terabytes of input data from various sensors, and use that data to simulate and predict weather patterns or earthquakes, or to help in the design of
aircraft or spacecraft, which may involve very little input data but terabytes of output data from the simulation itself. The problems they deal with are therefore unsuitable for quantum machines, which
currently have very limited ability to input and output classical data and very little memory, so they wouldn't be able to process it all.
Big data typically refers to data procured cumulatively about people: the things they purchased, the websites they visited, how often they go to certain places, their habits in certain applications, and so on.
Businesses can analyse this data to spot patterns and use them to their advantage, but because there is so much data, a lot of memory is needed to look through and process it all to
extract those patterns, so this may not be a good fit as a problem for a quantum computer either.
[h2]Volatile and non volatile storage[/h2]
Volatile storage is where active programs are stored, along with the data currently in use or shortly to be in use, whereas non-volatile storage generally keeps its data once the power is
turned off, so it would be things like ROM rather than RAM, as it doesn't need power to preserve its data.
Quantum computers currently have only volatile 'storage': they cannot save their data on some form of quantum hard disk (no such thing exists yet), and the quantum data isn't really
reusable; in most cases it is used and then disposed of after processing.
[h2]A balance[/h2]
As quantum computers are very slow at inputting and outputting data compared to modern supercomputers (which can input/output at a pace similar to their processing speed), there is potential in a hybrid
quantum-classical machine, where the data-heavy parts are processed on the classical machine, and the parts that are very slow for the classical machine are handed over to the quantum computer to do in
faster time. Quantum and classical machines would thus be complementary, rather than competitive.
[h2]Waves[/h2]
Just basic transverse wave ideas to keep in mind, along with the fact that a sine wave is one of the most basic forms of wave.
A standing wave is a wave that appears to oscillate in place, repeating the same motion. The simplest kind is a half wave, where the entire standing wave moves up in one direction and then
afterwards back down in the other; in a whole wave, half of the standing wave goes up while the other half goes down, swapping as it oscillates, and standing waves can be of even higher orders.
When the peaks and troughs of two waves line up at the same points, the waves are [i]in phase[/i]: constructive interference occurs and the amplitudes add together and increase. As the waves shift further apart
they become less and less constructive, and eventually, when the peak of one wave and the trough of the other are at the same point, the waves are [i]out of phase[/i]: destructive interference occurs, the amplitudes
subtract and decrease, possibly even cancelling each other out entirely if they are equal.
When such waves are allowed to continue, an overall pattern of constructive and destructive interference emerges, with periodic increases and decreases of amplitude caused by the interference rather
than by the individual waves - this is called [b]'beating'[/b].
The same pattern applies to longitudinal waves such as sound.
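The beating pattern can be checked numerically; this sketch (using two slightly detuned tones as an illustrative example) verifies the trig identity that exposes the slow amplitude envelope:

```python
import math

# Two sine waves with nearby frequencies f1 and f2 superpose into a fast
# oscillation whose amplitude envelope rises and falls at the beat
# frequency |f1 - f2|; the identity below makes the envelope explicit.
f1, f2 = 440.0, 444.0   # e.g. two slightly detuned tones (4 Hz beat)

def superposed(t):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def beat_form(t):
    # sin(a) + sin(b) == 2 * sin((a+b)/2) * cos((a-b)/2):
    # a fast sine at the average frequency, scaled by a slow cosine envelope.
    return (2 * math.sin(math.pi * (f1 + f2) * t)
              * math.cos(math.pi * (f1 - f2) * t))

t = 0.0137
print(abs(superposed(t) - beat_form(t)) < 1e-9)  # -> True
```

The slow cosine factor is the 'beat': the amplitude of the combined wave swells and fades even though neither input wave changes.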
Interference can occur in more than one dimension: there can be two-dimensional interference, and in fact [i]N[/i]-dimensional interference (though humans can't really visualise more than three dimensions). Quantum
computers can handle these, and can in fact create interference across many variables, which can be treated as separate dimensions (this will be discussed in depth in week 3).
[h2]More detailed waves and Superposition[/h2]
For interference patterns, the frequency, amplitude and phase of the waves all contribute to the compound/synthetic wave that results from the interference of two waves.
The frequencies and directions of the two waves dictate what the interference pattern looks like: if the directions of travel are opposite to one another, the result appears to be a standing wave,
oscillating in a set pattern; otherwise, beating occurs if the directions are not opposite.
The amplitudes of the waves also decide the output amplitude of the synthetic/compound wave, depending on how far out of phase they are and on their directions of travel.
If the waves are exactly out of phase, so that peaks and troughs coincide, and their amplitudes and frequencies are equal, the waves cancel out completely.
[b]Superposition is just the collective sum of the effects of the waves on the medium, so it is basically interference, but defined slightly differently.[/b]
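That "superposition is just a sum" idea can be verified directly; this small sketch checks the complete-cancellation case described above (sample times chosen arbitrarily):

```python
import math

# Superposition as a plain sum: two equal-amplitude, equal-frequency waves
# that are exactly out of phase (shifted by half a cycle) cancel everywhere.
def wave(t, phase=0.0):
    return math.sin(2 * math.pi * t + phase)

for t in (0.1, 0.25, 0.7):
    total = wave(t) + wave(t, phase=math.pi)  # pi radians = half a cycle shift
    print(abs(total) < 1e-12)  # -> True: complete destructive interference
```

Adding the two displacements point by point is all superposition is; interference patterns are just what that sum looks like.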