
Comparing changes

base repository: nils-wisiol/pypuf
base: v2.3.1
head repository: nils-wisiol/pypuf
compare: main

Commits on Aug 16, 2021

  1. 56d1ea2
  2. 9dee98b
  3. 5b9b70a
  4. ae0cf1b
  5. 27e0a71

Commits on Aug 17, 2021

  1. b91d644
  2. daa6eee

Commits on Aug 18, 2021

  1. 1d896f7
  2. 480a105
  3. 3e10c5b io: adds documentation (nils-wisiol, Aug 18, 2021)
  4. cf49c8e io: removes unused code (nils-wisiol, Aug 18, 2021)
  5. 5a918b1

Commits on Aug 19, 2021

  1. ac8fa5a
  2. ead7fdf Release Version v3.0.0 (nils-wisiol, Aug 19, 2021)
  3. 12a046e
  4. 587fb04
  5. 6caaaa0
  6. 4de7b5e Release Version v3.1.0 (nils-wisiol, Aug 19, 2021)

Commits on Aug 20, 2021

  1. 2329101
  2. c2f0d80
  3. f50cd6f

Commits on Sep 24, 2021

  1. 60de24c
  2. 9e609ac

Commits on Sep 27, 2021

  1. ed63388
  2. c23ba88 adds simulation, attack, and metrics related to optical PUFs (nils-wisiol and Adomas Baliuka, Sep 27, 2021; Co-Authored-By: Adomas Baliuka <A.Baliuka@physik.uni-muenchen.de>)
  3. d3657ad Release Version v3.2.0 (nils-wisiol, Sep 27, 2021)

Commits on Nov 12, 2021

  1. 0ed582e meta: fixes Nils' ORCID (nils-wisiol, Nov 12, 2021)
  2. 017d039

Commits on Nov 15, 2021

  1. 22b8040
  2. 7d32efa Release Version v3.2.1 (nils-wisiol, Nov 15, 2021)
  3. aac7e73
  4. 4bd9417

Commits on Jun 29, 2024

  1. 3550337
  2. da15c58
  3. 157efca
  4. 3e49340
  5. e0103c2
21 changes: 21 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,21 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-20.04
  tools:
    python: "3.9"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/requirements.txt
27 changes: 27 additions & 0 deletions CITATION.cff
@@ -0,0 +1,27 @@
cff-version: 1.1.0
message: pypuf is published via Zenodo. To refer to pypuf, please use DOI 10.5281/zenodo.3901410.
title: "pypuf: Cryptanalysis of Physically Unclonable Functions"
doi: 10.5281/zenodo.3901410
publisher: Zenodo
url: https://doi.org/10.5281/zenodo.3901410
authors:
  - family-names: Wisiol
    given-names: Nils
    orcid: "https://orcid.org/0000-0003-2606-614X"
  - family-names: Gräbnitz
    given-names: Christoph
  - family-names: Mühl
    given-names: Christopher
  - family-names: Zengin
    given-names: Benjamin
  - family-names: Soroceanu
    given-names: Tudor
  - family-names: Pirnay
    given-names: Niklas
  - family-names: Mursi
    given-names: Khalid T.
    orcid: "https://orcid.org/0000-0001-8032-8484"
  - family-names: Baliuka
    given-names: Adomas
    orcid: "https://orcid.org/0000-0002-7064-8502"
license: GPL 3.0
22 changes: 16 additions & 6 deletions README.md
@@ -13,9 +13,14 @@ Please check out the [pypuf hello world](https://pypuf.readthedocs.io/en/latest/

## Studies and Results

pypuf is used in the following projects:

- 2021, Wisiol et al.: Neural-Network-Based Modeling Attacks on XOR Arbiter PUFs Revisited
pypuf is used in a number of PUF-related research projects.
If you would like to add your project to the list, please open an issue or send an email.
In reverse chronological order:

- 2021, Wisiol: [Towards Attack Resilient Arbiter PUF-Based Strong PUFs](https://eprint.iacr.org/2021/1004):
Design proposal for the [LP-PUF](https://github.com/nils-wisiol/LP-PUF), claimed to mitigate known modeling attacks
while having reliable responses.
- 2021, Wisiol et al.: [Neural-Network-Based Modeling Attacks on XOR Arbiter PUFs Revisited](https://eprint.iacr.org/2021/555)
- 2020, Wisiol et al.: [Splitting the Interpose PUF: A Novel Modeling Attack Strategy](https://eprint.iacr.org/2019/1473):
Modeling attacks on the Interpose PUF using Logistic Regression in a Divide-and-Conquer strategy.
- 2020, Wisiol et al.: [Short Paper: XOR Arbiter PUFs have Systematic Response Bias](https://eprint.iacr.org/2019/1091):
@@ -26,15 +31,16 @@ pypuf is used in the following projects:
Simulation of the stability of Majority Vote XOR Arbiter PUFs.

Please check out the [archived version of pypuf v1](https://github.com/nils-wisiol/pypuf/tree/v1) to find the
original code used in these projects.
original code used in some of the older projects.

## Citation

To refer to pypuf, please use DOI `10.5281/zenodo.3901410`.
pypuf is published [via Zenodo](https://zenodo.org/badge/latestdoi/87066421).
Please cite this work as

> Nils Wisiol, Christoph Gräbnitz, Christopher Mühl, Benjamin Zengin, Tudor Soroceanu, Niklas Pirnay, & Khalid T. Mursi.
> Nils Wisiol, Christoph Gräbnitz, Christopher Mühl, Benjamin Zengin, Tudor Soroceanu, Niklas Pirnay, Khalid T. Mursi,
> & Adomas Baliuka.
> pypuf: Cryptanalysis of Physically Unclonable Functions (Version 2, June 2021). Zenodo.
> https://doi.org/10.5281/zenodo.3901410
@@ -48,7 +54,8 @@ or use the following BibTeX:
Benjamin Zengin and
Tudor Soroceanu and
Niklas Pirnay and
Khalid T. Mursi},
Khalid T. Mursi and
Adomas Baliuka},
title = {{pypuf: Cryptanalysis of Physically Unclonable
Functions}},
year = 2021,
@@ -62,6 +69,8 @@ or use the following BibTeX:
## Contribute

Testing, linting, licensing.
When first contributing, make sure to update the author lists in README.md (2x), index.rst of the docs (2x), and
CITATION.cff (1x).

### Run Tests

@@ -75,6 +84,7 @@ Testing, linting, licensing.

### Maintainer: Prepare New Release

1. Make sure author lists are up-to-date.
1. Make sure docs are testing and building without error (see above)
1. Commit all changes
1. Clean up `dist/` folder
10 changes: 10 additions & 0 deletions docs/appendix/bibliography.rst
@@ -4,6 +4,8 @@ Bibliography
..
Using Zotero "export bibliography" feature to clipboard, using Nature style. Index labels are created manually.
.. [AM21] Aghaie, A. & Moradi, A. Inconsistency of Simulation and Practice in Delay-based Strong PUFs. IACR
Transactions on Cryptographic Hardware and Embedded Systems 520–551 (2021) doi:10.46586/tches.v2021.i3.520-551.
.. [AZ17] Alkatheiri, M. S. & Zhuang, Y. Towards fast and accurate machine learning attacks of feed-forward arbiter
PUFs. in 2017 IEEE Conference on Dependable and Secure Computing 181–187 (2017). doi:10.1109/DESEC.2017.8073845.
.. [AZA18] Aseeri, A. O., Zhuang, Y. & Alkatheiri, M. S. A Machine Learning-Based Security Vulnerability Study on XOR
@@ -15,18 +17,26 @@ Bibliography
.. [CCLSR11] Chen, Q., Csaba, G., Lugli, P., Schlichtmann, U. & Ruhrmair, U. The Bistable Ring PUF: A new architecture
for strong Physical Unclonable Functions. in 2011 IEEE International Symposium on Hardware-Oriented Security and
Trust 134–141 (IEEE, 2011). doi:10.1109/HST.2011.5955011.
.. [CCPG21] Charlot, N., Canaday, D., Pomerance, A. & Gauthier, D. J. Hybrid Boolean Networks as Physically Unclonable
Functions. IEEE Access 9, 44855–44867 (2021).
.. [DV13] Delvaux, J. & Verbauwhede, I. Side channel modeling attacks on 65nm arbiter PUFs exploiting CMOS device noise.
in Hardware-Oriented Security and Trust (HOST), 2013 IEEE International Symposium on 137–142 (IEEE, 2013).
.. [GCvDD02] Gassend, B., Clarke, D., van Dijk, M. & Devadas, S. Silicon Physical Random Functions. in Proceedings of
the 9th ACM Conference on Computer and Communications Security 148–160 (ACM, 2002). doi:10.1145/586110.586132.
.. [GFS19] Ganji, F., Forte, D. & Seifert, J.-P. PUFmeter a Property Testing Tool for Assessing the Robustness of
Physically Unclonable Functions to Machine Learning Attacks. IEEE Access 7, 122513–122521 (2019).
.. [GLCDD04] Gassend, B., Lim, D., Clarke, D., Dijk, M. van & Devadas, S. Identification and authentication of
integrated circuits. Concurrency and Computation: Practice and Experience 16, 1077–1098 (2004).
.. [LMN93] Linial, N., Mansour, Y. & Nisan, N. Constant Depth Circuits, Fourier Transform, and Learnability. J. ACM 40,
607–620 (1993).
.. [MTZAA20] Mursi, K. T., Thapaliya, B., Zhuang, Y., Aseeri, A. O. & Alkatheiri, M. S. A Fast Deep Learning Method for
Security Vulnerability Study of XOR PUFs. Electronics 9, 1715 (2020).
.. [MKP08] Majzoobi, M., Koushanfar, F. & Potkonjak, M. Lightweight Secure PUFs. in Proceedings of the 2008 IEEE/ACM
International Conference on Computer-Aided Design 670–673 (IEEE Press, 2008).
.. [NSJM19] Nguyen, P. H. et al. The Interpose PUF: Secure PUF Design against State-of-the-art Machine Learning Attacks.
IACR Transactions on Cryptographic Hardware and Embedded Systems 243–290 (2019) doi:10.13154/tches.v2019.i4.243-290.
.. [ODon14] O’Donnell, R. Analysis of Boolean Functions. (Cambridge University Press, 2014).
.. [RHUWDFJ13] Rührmair, U. et al. Optical PUFs Reloaded. https://eprint.iacr.org/2013/215 (2013).
.. [RSSD10] Rührmair, U. et al. Modeling Attacks on Physical Unclonable Functions. in Proceedings of the 17th ACM
Conference on Computer and Communications Security 237–249 (ACM, 2010). doi:10.1145/1866307.1866335.
.. [SD07] Suh, G. E. & Devadas, S. Physical Unclonable Functions for Device Authentication and Secret Key Generation.
116 changes: 116 additions & 0 deletions docs/attacks/linear_regression.rst
@@ -0,0 +1,116 @@
Linear Regression
=================

Linear Regression fits a linear function to given data. The resulting linear function, also called a map, is guaranteed
to be optimal with respect to the total squared error, i.e. the sum of the squared differences between actual and
predicted values.

Linear Regression has many applications. In pypuf, it can be used to model :doc:`../simulation/optical` and
:doc:`../simulation/arbiter_puf`.
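
As a minimal illustration of the underlying idea, a least-squares-optimal map can also be computed directly with
numpy; the data below is synthetic and serves only as a sketch, it is not part of the pypuf API:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> X = rng.normal(size=(100, 3))  # synthetic feature vectors
>>> y = X @ np.array([1., -2., .5]) + 0.01 * rng.normal(size=100)  # noisy linear targets
>>> w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares-optimal linear map
>>> bool(np.allclose(w, [1., -2., .5], atol=.01))
True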


Arbiter PUF Reliability Side-Channel Attack [DV13]_
---------------------------------------------------

For Arbiter PUFs, the reliability for any given challenge :math:`c` has a close relationship with the difference in
delay for the top and bottom line. When modeling the Arbiter PUF response as

.. math::
    r = \text{sgn}\left[ D_\text{noise} + \langle w, x \rangle \right],

where :math:`x` is the feature vector corresponding to the challenge :math:`c` and :math:`w \in \mathbb{R}^n` are the
weights describing the Arbiter PUF, and :math:`D_\text{noise}` is chosen from a Gaussian distribution with zero mean
and variance :math:`\sigma_\text{noise}^2` to model the noise, then we can conclude that

.. math::
    \text{E}[r(x)] = \text{erf}\left( \frac{\langle w, x \rangle}{\sqrt{2}\sigma_\text{noise}} \right).

Hence, the delay difference :math:`\langle w, x \rangle` can be approximated based on an approximation of
:math:`\text{E}[r(x)]`, which can be easily obtained by an attacker. It gives

.. math::
    \langle w, x \rangle = \sqrt{2}\sigma_\text{noise} \cdot \text{erf}^{-1}\left( \text{E}[r(x)] \right).

This approximation works well even when :math:`\text{E}[r(x)]` is approximated based on only a few responses, e.g. 3
(see below).
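
As a sketch of this conversion (plain numpy and scipy, not part of the pypuf API; the averaged responses and the
noise level below are made-up example values), the estimated delay difference follows from the averaged responses via
the inverse error function:

>>> import numpy as np
>>> from scipy.special import erfinv
>>> sigma_noise = 0.25  # assumed noise level
>>> responses_mean = np.array([-0.6, 0.2, 0.9])  # hypothetical averaged responses for three challenges
>>> delay_estimate = np.sqrt(2) * sigma_noise * erfinv(responses_mean)
>>> bool(np.all(np.diff(delay_estimate) > 0))  # larger response mean implies larger estimated delay difference
True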

To demonstrate the attack, we initialize an Arbiter PUF simulation with noisiness chosen such that the reliability
will be about 91% *on average*:

>>> import pypuf.simulation, pypuf.io, pypuf.attack, pypuf.metrics
>>> puf = pypuf.simulation.ArbiterPUF(n=64, noisiness=.25, seed=3)
>>> pypuf.metrics.reliability(puf, seed=3).mean()
0.908...

We then create a CRP set using the *average* value of responses to 500 challenges, based on 5 measurements:

>>> challenges = pypuf.io.random_inputs(n=puf.challenge_length, N=500, seed=2)
>>> responses_mean = puf.r_eval(5, challenges).mean(axis=-1)
>>> crps = pypuf.io.ChallengeResponseSet(challenges, responses_mean)

Based on these approximated values ``responses_mean`` of the linear function :math:`\langle w, x \rangle`, we use
linear regression to find a linear mapping with small error to fit the data. Note that we use the ``transform_atf``
function to compute the feature vector :math:`x` from the challenges :math:`c`, as the mapping is linear in :math:`x`
(but not in :math:`c`).

>>> attack = pypuf.attack.LeastSquaresRegression(crps, feature_map=lambda cs: pypuf.simulation.ArbiterPUF.transform_atf(cs, k=1)[:, 0, :])
>>> model = attack.fit()

The linear map ``model`` will predict the delay difference of a given challenge. To obtain the predicted PUF response,
this prediction needs to be thresholded to either -1 or 1:

>>> model.postprocessing = model.postprocessing_threshold

To measure the resulting model accuracy, we use :meth:`pypuf.metrics.similarity`:

>>> pypuf.metrics.similarity(puf, model, seed=4)
array([0.902])


Modeling Attack on Integrated Optical PUFs [RHUWDFJ13]_
-------------------------------------------------------

The behavior of an integrated optical PUF token can be understood as a linear map
:math:`T \in \mathbb{C}^{n \times m}` of the given challenge, where the values of :math:`T` are determined by the given
PUF token, :math:`n` is the number of challenge pixels, and :math:`m` is the number of response pixels.
The speckle pattern of the PUF is a measurement of the intensity of its electromagnetic field at the output, hence the
intensity at a given response pixel :math:`r_i` for a given challenge :math:`c` can be written as

.. math::
    r_i = \left| c \cdot T \right|^2.

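As an aside, a small numpy sketch (toy sizes, not the pypuf API) can verify that the intensity is linear in the
pairwise products of challenge values, which is what the quadratic feature map used below exploits:

>>> import numpy as np
>>> rng = np.random.default_rng(1)
>>> T = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))  # toy transmission matrix, n=8, m=3
>>> c = rng.choice([-1., 1.], size=8)  # toy challenge
>>> intensity = np.abs(c @ T) ** 2  # response pixel intensities
>>> coeffs = np.real(np.einsum('jm,km->jkm', T, T.conj()))  # coefficients of the products c_j * c_k
>>> features = np.outer(c, c)  # quadratic feature map of the challenge
>>> bool(np.allclose(intensity, np.einsum('jk,jkm->m', features, coeffs)))
True
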
pypuf ships a basic simulator for the responses of :doc:`../simulation/optical`, on whose data a modeling attack
can be demonstrated. We first initialize a simulation and collect challenge-response pairs:

>>> puf = pypuf.simulation.IntegratedOpticalPUF(n=64, m=25, seed=1)
>>> crps = pypuf.io.ChallengeResponseSet.from_simulation(puf, N=1000, seed=2)

Then, we fit a linear map on the data contained in ``crps``. Note that the simulation returns *intensity* values rather
than *field* values. We thus need to account for quadratic terms using an appropriate
:meth:`feature map <pypuf.attack.LeastSquaresRegression.feature_map_optical_pufs_reloaded_improved>`.

>>> attack = pypuf.attack.LeastSquaresRegression(crps, feature_map=pypuf.attack.LeastSquaresRegression.feature_map_optical_pufs_reloaded_improved)
>>> model = attack.fit()

The success of the attack can be visually inspected or quantified by the :doc:`/metrics/correlation` of the response
pixels:

>>> crps_test = pypuf.io.ChallengeResponseSet.from_simulation(puf, N=1000, seed=3)
>>> pypuf.metrics.correlation(model, crps_test).mean()
0.69...

Note that the correlation can differ when post-processing of the responses is additionally performed, e.g. by
thresholding the values such that half of them give -1 and the other half 1:

>>> import numpy as np
>>> threshold = lambda r: np.sign(r - np.quantile(r.flatten(), .5))
>>> pypuf.metrics.correlation(model, crps_test, postprocessing=threshold).mean()
0.41...


API
---

.. autoclass::
pypuf.attack.LeastSquaresRegression
:members: __init__, fit, model, feature_map_optical_pufs_reloaded, feature_map_optical_pufs_reloaded_improved
43 changes: 43 additions & 0 deletions docs/attacks/lmn.rst
@@ -0,0 +1,43 @@
LMN Algorithm
=============

The LMN Algorithm can compute models for functions whose Fourier spectra are concentrated on the low degrees [LMN93]_,
[ODon14]_ and is part of the PUFMeter [GFS19]_ PUF testing toolbox.

The attack requires access to a number of uniformly random challenge-response pairs (CRPs). Depending on the number of
provided CRPs and the type of function under attack, the result has a guaranteed accuracy with a certain probability.
(For more details, we refer the reader to the PAC learning framework [ODon14]_.)

Example Usage
-------------

To run the attack, CRP data of the PUF token under attack is required. Such data can be obtained through experiments
on real hardware, or using a simulation. In this example, we use the pypuf Arbiter PUF simulator and configure it to
use feature vectors as inputs rather than challenge vectors by setting ``transform='id'``:

>>> import pypuf.simulation, pypuf.io
>>> puf = pypuf.simulation.ArbiterPUF(n=32, transform="id", seed=1)
>>> challenges = pypuf.io.random_inputs(n=32, N=2000, seed=2)
>>> crps = pypuf.io.ChallengeResponseSet(challenges, puf.val(challenges))

To run the attack, we need to decide how many levels of Fourier coefficients we want to approximate. There are
:math:`n` levels; the :math:`i`-th level has :math:`\binom{n}{i}` coefficients. The run time of the attack is linear
in the number of coefficients. Given the steeply increasing run time, in practice, degree 1 and degree 2 are reasonable
choices. Increasing the degree further will not only lead to high computation time, but may actually
*worsen* the predictive accuracy, as more coefficients need to be approximated.
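
To illustrate this growth (standard library arithmetic only, not pypuf functionality), the total number of
coefficients up to degree :math:`d` for the :math:`n = 32` challenge bits used above can be computed as follows:

>>> from math import comb
>>> [sum(comb(32, i) for i in range(d + 1)) for d in (1, 2, 3)]  # coefficients up to degree d
[33, 529, 5489]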

>>> import pypuf.attack
>>> attack = pypuf.attack.LMNAttack(crps, deg=1)
>>> model = attack.fit()

The model accuracy can be measured using the pypuf similarity metric :meth:`pypuf.metrics.similarity`.

>>> import pypuf.metrics
>>> pypuf.metrics.similarity(puf, model, seed=4)
array([0.95])

API
---

.. autoclass:: pypuf.attack.LMNAttack
:members: __init__, fit, model
4 changes: 2 additions & 2 deletions docs/attacks/lr.rst
@@ -26,15 +26,15 @@ need careful adjustment for each choice of security parameters in the PUF. Then
>>> attack.fit() # doctest:+ELLIPSIS +NORMALIZE_WHITESPACE
Epoch 1/100
...
50/50 [==============================] - ... - loss: 0.4... - accuracy: 0.9... - val_loss: 0.4556 - val_accuracy: 0.9600
50/50 [==============================] - ... - loss: 0.4... - accuracy: 0.9... - val_loss: 0.4643 - val_accuracy: 0.9620
<pypuf.simulation.base.LTFArray object at 0x...>
>>> model = attack.model

The model accuracy can be measured using the pypuf similarity metric :meth:`pypuf.metrics.similarity`.

>>> import pypuf.metrics
>>> pypuf.metrics.similarity(puf, model, seed=4)
array([0.957])
array([0.966])

Applicability
-------------
4 changes: 2 additions & 2 deletions docs/attacks/mlp.rst
@@ -31,15 +31,15 @@ need careful adjustment for each choice of security parameters in the PUF. Then
>>> attack.fit() # doctest:+ELLIPSIS +NORMALIZE_WHITESPACE
Epoch 1/30
...
495/495 [==============================] - ... - loss: 0.0... - accuracy: 0.9... - val_loss: 0.0658 - val_accuracy: 0.9730
495/495 [==============================] - ... - loss: 0.0... - accuracy: 0.9... - val_loss: 0.0670 - val_accuracy: 0.9750
<pypuf.attack.mlp2021.MLPAttack2021.Model object at 0x...>
>>> model = attack.model

The model accuracy can be measured using the pypuf similarity metric :meth:`pypuf.metrics.similarity`.

>>> import pypuf.metrics
>>> pypuf.metrics.similarity(puf, model, seed=4)
array([0.963])
array([0.97])

Example Usage [AZA18]_
----------------------
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -23,7 +23,7 @@
master_doc = 'index'

# The full version, including alpha/beta/rc tags
release = '2.3.1'
release = '3.2.1'


# -- General configuration ---------------------------------------------------