Add QIR examples & update README #37

Merged · 5 commits · Oct 21, 2024
4 changes: 2 additions & 2 deletions .github/workflows/format.yml
@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10"]
python-version: ["3.11"]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
@@ -23,4 +23,4 @@ jobs:
pip install "black[jupyter]"
- name: Check formating with black
run: |
black algorithms qbraid_lab qbraid_sdk --check
black algorithms qbraid_lab qbraid_sdk qbraid_qir --line-length=100 --check
29 changes: 25 additions & 4 deletions README.md
@@ -1,8 +1,29 @@
# qBraid Lab Demos

# qBraid Demos and Tutorials

Welcome to the qBraid Demos Repository! This repository contains a collection of Jupyter Notebooks showcasing how to use **qBraid's open-source SDKs** for quantum computing. It includes hands-on examples that demonstrate the integration of various tools with **qBraid-Lab**, making it easier to run quantum computing experiments seamlessly in the cloud.

[<img src="https://qbraid-static.s3.amazonaws.com/logos/Launch_on_qBraid_white.png" width="150">](https://account.qbraid.com?gitHubUrl=https://github.com/qBraid/qbraid-lab-demo.git)

qBraid is a cloud-based platform for quantum computing.
## Contents

This repository features tutorials and examples for the following:

- [**qBraid-SDK**](https://docs.qbraid.com/sdk/): Learn how to interact with quantum devices using the qBraid Runtime framework.
- [**qBraid-QIR**](https://docs.qbraid.com/qir/): Explore qBraid's Quantum Intermediate Representation (QIR) interface using familiar frameworks like OpenQASM 3 and Cirq.
- [**qBraid-CLI**](https://docs.qbraid.com/cli/): Understand how to manage quantum jobs and resources using qBraid's Command Line Interface.
- [**qBraid-Lab**](https://docs.qbraid.com/lab/) Integration:
- [**Quantum Jobs**](https://docs.qbraid.com/lab/user-guide/quantum-jobs): Learn how to submit and manage quantum jobs in qBraid Lab.
- [**GPUs**](https://docs.qbraid.com/lab/user-guide/gpus): Learn how to accelerate your quantum simulations using GPUs available through qBraid Lab.

## Documentation and Resources

For more information on qBraid and its tools, check out:

- **qBraid Platform Documentation**: Comprehensive user guides and resources at [docs.qbraid.com](https://docs.qbraid.com/).
- **qBraid Software API Reference**: Detailed API documentation for the qBraid-SDK at [sdk.qbraid.com](https://sdk.qbraid.com/).
- **qBraid GitHub Community**: Join the discussion and contribute at [GitHub Community Page](https://github.com/qBraid/community).

This repository provides demos and tutorials on using the [qBraid SDK](https://github.com/qBraid/qBraid), [qBraid CLI](https://docs.qbraid.com/projects/cli/en/latest/cli/qbraid.html), and qBraid Lab features including [Quantum Jobs](https://docs.qbraid.com/projects/lab/en/latest/lab/quantum_jobs.html), and [GPUs](https://docs.qbraid.com/projects/lab/en/latest/lab/gpu.html).
## Contribution

For more resources and reference material, see [qBraid Docs](https://docs.qbraid.com/en/latest/).
Contributions to this repository are welcome! If you encounter any issues or would like to contribute improvements or new examples, please open an issue or submit a pull request.
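
Since the headline change in this PR is the new set of QIR examples, a brief orientation sketch of the kind of conversion those examples cover is included below. It is not taken from this diff: it assumes qbraid-qir exposes a `qasm3_to_qir` entry point (as described in its documentation) that compiles an OpenQASM 3 program into a QIR module whose LLVM IR can be printed as text.

```python
# Hypothetical minimal sketch (not part of this diff): convert an OpenQASM 3
# Bell-state program to QIR. Assumes qbraid_qir.qasm3.qasm3_to_qir exists and
# accepts a program string.
from qbraid_qir.qasm3 import qasm3_to_qir

bell_qasm = """
OPENQASM 3.0;
include "stdgates.inc";
qubit[2] q;
bit[2] c;
h q[0];
cx q[0], q[1];
c = measure q;
"""

qir_module = qasm3_to_qir(bell_qasm)  # QIR program as an LLVM module
print(str(qir_module))                # inspect the generated LLVM IR
```

A Cirq-based converter is documented for the package as well; for the exact, current API, defer to the qbraid-qir docs linked in the README above.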
44 changes: 13 additions & 31 deletions algorithms/quantum_opt_rl/QCO.ipynb
@@ -230,9 +230,7 @@
"# Adjust the optimization level to see how the number of passes changes\n",
"pass_manager = generate_preset_pass_manager(3, backend)\n",
"\n",
"print(\n",
" f\"The pass manager for {FakeGuadalupeV2.backend_name} has the following passes: \\n\"\n",
")\n",
"print(f\"The pass manager for {FakeGuadalupeV2.backend_name} has the following passes: \\n\")\n",
"\n",
"# Print out the passes in the pass manager\n",
"for i, pass_ in enumerate(pass_manager.passes()):\n",
@@ -406,9 +404,7 @@
"optimization_levels = [0, 1, 2, 3]\n",
"\n",
"\n",
"def optimize_circuits(\n",
" circuit: QuantumCircuit, optimization_levels: List[int]\n",
") -> List[float]:\n",
"def optimize_circuits(circuit: QuantumCircuit, optimization_levels: List[int]) -> List[float]:\n",
" \"\"\"Optimize the circuit at each optimization level\"\"\"\n",
" # Create an empty list to store the times\n",
" times = []\n",
@@ -856,9 +852,7 @@
],
"source": [
"# 1q gate optimization\n",
"commutative_cancellation = PassManager(\n",
" [CommutativeCancellation(basis_gates=[\"cx\", \"u\", \"id\"])]\n",
")\n",
"commutative_cancellation = PassManager([CommutativeCancellation(basis_gates=[\"cx\", \"u\", \"id\"])])\n",
"\n",
"# Get the result circuit\n",
"result = commutative_cancellation.run(circuit)\n",
@@ -1366,11 +1360,11 @@
"metadata": {},
"outputs": [],
"source": [
"sub_batch_size = 64 # cardinality of the sub-samples gathered from the current data in the inner loop\n",
"num_epochs = 10 # optimization steps per batch of data collected\n",
"clip_epsilon = (\n",
" 0.2 # clip value for PPO loss: see the equation in the intro for more context.\n",
"sub_batch_size = (\n",
" 64 # cardinality of the sub-samples gathered from the current data in the inner loop\n",
")\n",
"num_epochs = 10 # optimization steps per batch of data collected\n",
"clip_epsilon = 0.2 # clip value for PPO loss: see the equation in the intro for more context.\n",
"gamma = 0.99\n",
"lmbda = 0.95\n",
"entropy_eps = 1e-4"
@@ -1623,9 +1617,7 @@
],
"source": [
"actor_net = agent.model\n",
"policy_module = TensorDictModule(\n",
" actor_net, in_keys=[\"observation\"], out_keys=[\"loc\", \"scale\"]\n",
")\n",
"policy_module = TensorDictModule(actor_net, in_keys=[\"observation\"], out_keys=[\"loc\", \"scale\"])\n",
"print(env.action_spec.shape[-1])\n",
"\n",
"policy_module = ProbabilisticActor(\n",
@@ -1767,9 +1759,7 @@
}
],
"source": [
"advantage_module = GAE(\n",
" gamma=gamma, lmbda=lmbda, value_network=value_module, average_gae=True\n",
")\n",
"advantage_module = GAE(gamma=gamma, lmbda=lmbda, value_network=value_module, average_gae=True)\n",
"print(advantage_module.__dict__)\n",
"lr = 3e-4\n",
"\n",
@@ -1789,9 +1779,7 @@
"# loss_module = loss_module.set_keys\n",
"\n",
"optim = torch.optim.Adam(loss_module.parameters(), lr)\n",
"scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(\n",
" optim, total_frames // frames_per_batch, 0.0\n",
")"
"scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, total_frames // frames_per_batch, 0.0)"
]
},
{
@@ -1828,9 +1816,7 @@
" subdata = replay_buffer.sample(sub_batch_size)\n",
" loss_vals = loss_module(subdata.to(device))\n",
" loss_value = (\n",
" loss_vals[\"loss_objective\"]\n",
" + loss_vals[\"loss_critic\"]\n",
" + loss_vals[\"loss_entropy\"]\n",
" loss_vals[\"loss_objective\"] + loss_vals[\"loss_critic\"] + loss_vals[\"loss_entropy\"]\n",
" )\n",
"\n",
" # Optimization: backward, grad clipping and optimization step\n",
@@ -1843,9 +1829,7 @@
"\n",
" logs[\"reward\"].append(tensordict_data[\"next\", \"reward\"].mean().item())\n",
" pbar.update(tensordict_data.numel() * frame_skip)\n",
" cum_reward_str = (\n",
" f\"average reward={logs['reward'][-1]: 4.4f} (init={logs['reward'][0]: 4.4f})\"\n",
" )\n",
" cum_reward_str = f\"average reward={logs['reward'][-1]: 4.4f} (init={logs['reward'][0]: 4.4f})\"\n",
" logs[\"step_count\"].append(tensordict_data[\"step_count\"].max().item())\n",
" stepcount_str = f\"step count (max): {logs['step_count'][-1]}\"\n",
" logs[\"lr\"].append(optim.param_groups[0][\"lr\"])\n",
@@ -1861,9 +1845,7 @@
" # execute a rollout with the trained policy\n",
" eval_rollout = env.rollout(1000, policy_module)\n",
" logs[\"eval reward\"].append(eval_rollout[\"next\", \"reward\"].mean().item())\n",
" logs[\"eval reward (sum)\"].append(\n",
" eval_rollout[\"next\", \"reward\"].sum().item()\n",
" )\n",
" logs[\"eval reward (sum)\"].append(eval_rollout[\"next\", \"reward\"].sum().item())\n",
" logs[\"eval step_count\"].append(eval_rollout[\"step_count\"].max().item())\n",
" eval_str = (\n",
" f\"eval cumulative reward: {logs['eval reward (sum)'][-1]: 4.4f} \"\n",
@@ -89,12 +89,8 @@ def create_random_circuit(self, seed: Optional[int] = None) -> QuantumCircuit:
"""
if seed is not None:
np.random.seed(seed)
qc_1 = random_circuit(
self.num_qubits, np.random.randint(self.min_depth, self.max_depth)
)
qc_2 = random_circuit(
self.num_qubits, np.random.randint(self.min_depth, self.max_depth)
)
qc_1 = random_circuit(self.num_qubits, np.random.randint(self.min_depth, self.max_depth))
qc_2 = random_circuit(self.num_qubits, np.random.randint(self.min_depth, self.max_depth))

qc_1.compose(qc_2, inplace=True)

@@ -121,9 +117,7 @@ def _create_tensor_from_circuit(self, circuit: QuantumCircuit) -> torch.Tensor:
"""

# The tensor is of shape (num_qubits, num_gates, num_params)
circuit_tensor = torch.zeros(
(circuit.num_qubits, len(self.qiskit_gates), self.num_actions)
)
circuit_tensor = torch.zeros((circuit.num_qubits, len(self.qiskit_gates), self.num_actions))

for _, op in enumerate(circuit):
name = op.operation.name
@@ -142,9 +136,7 @@ def get_qiskit_gates(self, seed: Optional[int] = None):
def get_qiskit_gates(self, seed: Optional[int] = None):
"""Returns a dictionary of all qiskit gates with random parameters"""
qiskit_gates = {
attr: None
for attr in dir(standard_gates)
if attr[0] in string.ascii_uppercase
attr: None for attr in dir(standard_gates) if attr[0] in string.ascii_uppercase
}

# Add random parameters to gates for circuit generation
@@ -156,9 +148,7 @@
]
params = self._generate_params(varnames, seed=seed)
qiskit_gates[gate] = getattr(standard_gates, gate)(**params)
qiskit_gates = OrderedDict(
{v.name: v for _, v in qiskit_gates.items() if v is not None}
)
qiskit_gates = OrderedDict({v.name: v for _, v in qiskit_gates.items() if v is not None})
# Add iSwap gate
qiskit_gates["iswap"] = iSwapGate()
return qiskit_gates
16 changes: 4 additions & 12 deletions qbraid_lab/fire_opal/get-started.ipynb
@@ -354,9 +354,7 @@
"metadata": {},
"outputs": [],
"source": [
"supported_devices = fireopal.show_supported_devices(credentials=credentials)[\n",
" \"supported_devices\"\n",
"]\n",
"supported_devices = fireopal.show_supported_devices(credentials=credentials)[\"supported_devices\"]\n",
"for name in supported_devices:\n",
" print(name)"
]
Expand Down Expand Up @@ -501,9 +499,7 @@
],
"source": [
"print(f\"Success probability: {100 * bitstring_results[0]['11111111111']:.2f}%\")\n",
"plot_bv_results(\n",
" bitstring_results[0], hidden_string=\"11111111111\", title=f\"Fire Opal ($n=11$)\"\n",
")"
"plot_bv_results(bitstring_results[0], hidden_string=\"11111111111\", title=f\"Fire Opal ($n=11$)\")"
]
},
{
Expand Down Expand Up @@ -552,15 +548,11 @@
"circuit_qiskit = qiskit.QuantumCircuit.from_qasm_str(circuit_qasm)\n",
"ibm_result = sampler.run(circuit_qiskit).result()\n",
"ibm_probabilities = (\n",
" ibm_result.quasi_dists[0]\n",
" .nearest_probability_distribution()\n",
" .binary_probabilities(num_bits=11)\n",
" ibm_result.quasi_dists[0].nearest_probability_distribution().binary_probabilities(num_bits=11)\n",
")\n",
"\n",
"print(f\"Success probability: {100 * ibm_probabilities['11111111111']:.2f}%\")\n",
"plot_bv_results(\n",
" ibm_probabilities, hidden_string=\"11111111111\", title=f\"{backend_name} ($n=11$)\"\n",
")"
"plot_bv_results(ibm_probabilities, hidden_string=\"11111111111\", title=f\"{backend_name} ($n=11$)\")"
]
},
{
4 changes: 1 addition & 3 deletions qbraid_lab/gpu/cirq_VQE_cuQuantum.ipynb
@@ -444,9 +444,7 @@
"def energy_func(length, h, jr, jc):\n",
" def energy(measurements):\n",
" # Reshape measurement into array that matches grid shape.\n",
" meas_list_of_lists = [\n",
" measurements[i * length : (i + 1) * length] for i in range(length)\n",
" ]\n",
" meas_list_of_lists = [measurements[i * length : (i + 1) * length] for i in range(length)]\n",
" # Convert true/false to +1/-1.\n",
" pm_meas = 1 - 2 * np.array(meas_list_of_lists).astype(np.int32)\n",
"\n",
4 changes: 1 addition & 3 deletions qbraid_lab/gpu/lightning_gpu_benchmark.ipynb
@@ -177,9 +177,7 @@
" qml.CNOT(wires=[i, (i + 1) % n_wires])\n",
"\n",
" # Measure all qubits\n",
" observables = [qml.PauliZ(n_wires - 1)] + [\n",
" qml.Identity(i) for i in range(n_wires - 1)\n",
" ]\n",
" observables = [qml.PauliZ(n_wires - 1)] + [qml.Identity(i) for i in range(n_wires - 1)]\n",
" return qml.expval(qml.operation.Tensor(*observables))"
]
},
4 changes: 1 addition & 3 deletions qbraid_lab/quantum_jobs/aws_iqm_quantum_jobs.ipynb
@@ -106,9 +106,7 @@
"\n",
"# The IQM Garnet device\n",
"device = AwsDevice(\"arn:aws:braket:eu-north-1::device/qpu/iqm/Garnet\")\n",
"supported_gates = device.properties.action[\n",
" \"braket.ir.openqasm.program\"\n",
"].supportedOperations\n",
"supported_gates = device.properties.action[\"braket.ir.openqasm.program\"].supportedOperations\n",
"# print the supported gate set\n",
"print(\"Gate set supported by the IQM device:\\n\", supported_gates)"
]