Minimisation of a negative log likelihood fit to extract the lifetime of the D^0 meson (MNLL2ELDM)

Introduction

The average lifetime of the D^0 meson was computed from 10,000 experimental measurements of the decay time and the associated error by minimising the negative log-likelihood (NLL), both with and without background signals. In the absence of background signals, the parabolic minimisation method was employed, yielding an average lifetime of (404.5 ± 4.7) x 10^-15 s with a tolerance level of 10^-6. This result is inconsistent with the literature value provided by the Particle Data Group, deviating by approximately 6 x 10^-15 s. To account for possible background signals, an alternative distribution and the corresponding NLL were derived. This NLL was then minimised using the gradient, Newton's and Quasi-Newton methods, all of which yielded consistent results. The average lifetime and the fraction of background signals in the sample were estimated to be (409.7 ± 5.5) x 10^-15 s and 0.0163 ± 0.0086, respectively, where the uncertainties were calculated using an error matrix; the correlation coefficient was found to be -0.4813. The literature value lies within this uncertainty, with a percentage difference of approximately 0.098%. These results verify the presence of background signals in the data and validate the expected distribution, which was derived by modelling the background signal as a Gaussian arising from the limited detector resolution.
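
For reference, the distribution implied by this description is the standard one: an exponential decay convolved with a Gaussian resolution function, plus a pure-Gaussian background term weighted by the signal fraction a. Below is a minimal sketch of the corresponding NLL (the function names are illustrative and the exact form used by the script may differ):

  import numpy as np
  from scipy.special import erfc

  def signal_pdf(t, tau, sigma):
      # Exponential decay (lifetime tau) convolved with a Gaussian resolution of width sigma
      z = (sigma / tau - t / sigma) / np.sqrt(2.0)
      return np.exp(sigma**2 / (2.0 * tau**2) - t / tau) * erfc(z) / (2.0 * tau)

  def background_pdf(t, sigma):
      # Zero-lifetime Gaussian: pure detector resolution, modelling the background
      return np.exp(-t**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

  def nll(tau, a, t, sigma):
      # a is the signal fraction; a = 1 recovers the background-free case
      pdf = a * signal_pdf(t, tau, sigma) + (1.0 - a) * background_pdf(t, sigma)
      return -np.sum(np.log(pdf))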

Requirements

Python 2.x is required to run the script.

Create an environment using conda as follows:

  conda create -n python2 python=2.7

Then activate the new environment by:

  conda activate python2
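
With the environment activated, the analysis is run with the Python 2 interpreter, e.g. (the script name here is a placeholder; substitute the actual entry point of this repository):

  python <script_name>.py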

Results

figure1

Figure 1: Histogram of the measured decay times of D^0 mesons and the expected distribution for various tau and sigma in units of picoseconds. The figure illustrates that the average lifetime lies approximately between 0.4 ps and 0.5 ps, closer to the former value. The second panel clearly demonstrates that the distribution with tau = 0.4 ps and sigma = 0.2 ps fits the profile of the histogram most closely.


figure2

Figure 2: Result of the minimisation using the parabolic method on a hyperbolic cosine function. The initial guesses were 2 ps, 3 ps and 5 ps, and the minimum is estimated to be at tau = 2.80 x 10^-11 (3 s.f.) using a tolerance level of 10^-6.
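
For illustration, here is a minimal sketch of successive parabolic interpolation as described above, assuming the textbook update rule (the script's implementation details may differ):

  import numpy as np

  def parabolic_minimise(f, x0, x1, x2, tol=1e-6, max_iter=500):
      # Fit a parabola through three points, jump to its vertex,
      # then discard the worst of the four points and repeat.
      xs = [x0, x1, x2]
      x_prev = None
      for _ in range(max_iter):
          y = [f(x) for x in xs]
          num = ((xs[2]**2 - xs[1]**2) * y[0]
                 + (xs[0]**2 - xs[2]**2) * y[1]
                 + (xs[1]**2 - xs[0]**2) * y[2])
          den = ((xs[2] - xs[1]) * y[0]
                 + (xs[0] - xs[2]) * y[1]
                 + (xs[1] - xs[0]) * y[2])
          x_new = 0.5 * num / den
          if x_prev is not None and abs(x_new - x_prev) < tol:
              return x_new
          x_prev = x_new
          xs.append(x_new)
          xs.pop(int(np.argmax([f(x) for x in xs])))  # keep the three best points
      return x_prev

  print(parabolic_minimise(np.cosh, 2.0, 3.0, 5.0))  # converges towards 0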


figure3

Figure 3: Graph of the 1-D NLL. The minimisation yielded the minimum tau_min = 0.4045 ps correct to 4 d.p. with a tolerance level of 10^-6. The minimum was originally estimated to be roughly 0.40 ps, which agrees with this result correct to 2 d.p. Moreover, the approximating parabola, with a curvature of 22,572, illustrates its suitability for locating the minimum.
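
Near the minimum the NLL is approximately parabolic, which is what makes both the parabolic method and the standard error estimate work:

  NLL(tau) ≈ NLL(tau_min) + (tau - tau_min)^2 / (2 sigma^2)

so the NLL rises by 0.5 at tau = tau_min ± sigma, and the curvature at the minimum equals 1/sigma^2. This is the textbook relation, not necessarily the exact procedure used by the script.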


figure4

Figure 4: The dependence of the standard deviation on the number of measurements, on logarithmic scales. The minimisation of the NLL function used initial guesses of 0.2 ps, 0.3 ps and 0.5 ps. Each panel depicts the standard deviation decreasing linearly with the number of measurements on logarithmic scales, so a linear fit was applied and extrapolated, assuming the pattern remains linear in the region of interest. The extrapolation yielded the required number of measurements for an accuracy of 10^-15 s as (2.3 to 2.6) x 10^5.
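
As a rough consistency check, assuming the standard statistical scaling sigma ∝ N^(-1/2): with sigma ≈ 4.7 x 10^-15 s at N = 10^4 measurements (the background-free fit above), reaching an accuracy of 10^-15 s requires N ≈ 4.7^2 x 10^4 ≈ 2.2 x 10^5, in good agreement with the extrapolated range.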


figure5

Figure 5: Contour plots of the 2-D hyperbolic cosine function showing the result of the minimisation with an initial condition of (x, y) = (-2.5, 3.0), a step length of alpha = 0.05 and a tolerance level of 10^-6. The left figure is an enlarged version of the right. The minima estimated using the Quasi-Newton, gradient and Newton's methods are (x, y) = (-1.92, 1.91) x 10^-5, (x, y) = (-1.86, 1.96) x 10^-5 and (x, y) = (-2.42 x 10^-13, 6.72 x 10^-8) with 213, 222 and 5 iterations, respectively. The results graphically demonstrate the minimisation process, with all methods yielding the expected results and thus confirming the validity of the computation. The paths generated by the Quasi-Newton and gradient methods show only a small difference, with similar numbers of iterations, whereas Newton's method exhibits a greater convergence speed.
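
A minimal sketch of the gradient and Newton's methods on this test function, assuming it has the form f(x, y) = cosh(x) + cosh(y) (an assumption here; the Quasi-Newton variant, e.g. DFP or BFGS, is omitted for brevity):

  import numpy as np

  def f(v):
      # Assumed form of the 2-D test function: f(x, y) = cosh(x) + cosh(y)
      return np.cosh(v[0]) + np.cosh(v[1])

  def grad(v):
      return np.array([np.sinh(v[0]), np.sinh(v[1])])

  def hess(v):
      return np.diag([np.cosh(v[0]), np.cosh(v[1])])

  def gradient_descent(v, alpha=0.05, tol=1e-6, max_iter=10000):
      # Repeatedly step along the negative gradient, scaled by alpha
      for n in range(1, max_iter + 1):
          step = alpha * grad(v)
          v = v - step
          if np.linalg.norm(step) < tol:
              break
      return v, n

  def newton(v, tol=1e-6, max_iter=100):
      # Solve H * step = grad at each iteration (quadratic convergence near the minimum)
      for n in range(1, max_iter + 1):
          step = np.linalg.solve(hess(v), grad(v))
          v = v - step
          if np.linalg.norm(step) < tol:
              break
      return v, n

  v0 = np.array([-2.5, 3.0])
  print(gradient_descent(v0))  # many small steps along -grad
  print(newton(v0))            # converges in a handful of iterations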


figure6

Figure 6: Contour plots of the 2-D NLL function showing the result of the minimisation with an initial condition of (a, tau) = (0.2, 0.4 ps), a step length of alpha = 0.00001 and a tolerance level of 10^-6. The plot on the left is an enlarged version of the plot on the right. The positions of the minimum estimated using the Quasi-Newton, gradient and Newton's methods were identical correct to 4 d.p., at (a, tau) = (0.9837, 0.4097 ps), reached in 98 iterations by the first two methods and in 6 by the third. The figures show that the paths taken during the minimisation are almost identical for the Quasi-Newton and gradient methods; the blue curve virtually superimposes on the green curve. The path generated by Newton's method, on the other hand, differs and locates the minimum in a relatively small number of iterations. Note: the central-difference scheme (CDS) was used to approximate the gradients for this particular result.
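
The central-difference scheme (CDS) mentioned in the caption approximates each component of the gradient from two function evaluations; a minimal sketch (the step size h is an illustrative choice):

  import numpy as np

  def cds_gradient(f, v, h=1e-6):
      # Central-difference scheme: second-order accurate in h
      g = np.zeros(len(v))
      for i in range(len(v)):
          e = np.zeros(len(v))
          e[i] = h
          g[i] = (f(v + e) - f(v - e)) / (2.0 * h)
      return g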


figure8

Figure 7: The error ellipse: a contour of the NLL corresponding to a one-standard-deviation change in the parameters about the minimum.
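
The error ellipse and the uncertainties quoted in the introduction follow from the error matrix, i.e. the inverse of the NLL Hessian at the minimum; a minimal sketch of this standard construction (the Hessian itself may be obtained analytically or by finite differences):

  import numpy as np

  def errors_from_hessian(hessian):
      # Error matrix: the inverse of the NLL Hessian at the minimum
      cov = np.linalg.inv(hessian)
      sigma = np.sqrt(np.diag(cov))             # one-sigma uncertainties
      corr = cov[0, 1] / (sigma[0] * sigma[1])  # correlation coefficient
      return sigma, corr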

🔗 Links

linkedin

License

MIT License

Owner
Son Gyo Jung