# Magnetism in Superconductors

In this project we analyse data collected by Ana-Elena Tutueanu, looking for a magnetic signal from neutrons scattered off the superconductor $\text{La}_{2-x}\text{Sr}_x\text{CuO}_4$ with $x=0.07$ at different temperatures.

We expect to see two Gaussian signals at $q \approx 1 \pm 0.1$, and we want to investigate at which temperature we lose the signal: is it around $T_c=16K$?

## Basics

First we want to import our libraries.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.optimize import curve_fit, fsolve
from IPython.display import display, Latex
```

## Load data

Before we load the data, we expand the file ranges into a list of all the file numbers belonging to each dataset.

```python
temps = [1.5, 4, 7, 10, 20, 40]
file_ranges = [
    [18536, 18538],
    [18642, 18644],
    [18645, 18647],
    [18605, 18607],
    [18608, 18610],
    [[18543, 18545], [18578, 18580]],
]

# Expand an inclusive [first, last] range into a list of file numbers.
expand = lambda limits: list(range(limits[0], limits[1]+1))

expanded_files = []
for limits in file_ranges:
    if isinstance(limits[0], int):
        expanded_files.append(expand(limits))
    else:
        sub_expanded = []
        for sublimits in limits:
            sub_expanded.extend(expand(sublimits))
        expanded_files.append(sub_expanded)
```
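As a quick sanity check, the last entry should now contain all six file numbers from the two $40K$ sub-ranges:

```python
print(expanded_files[5])
# [18543, 18544, 18545, 18578, 18579, 18580]
```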

Next we define some settings and helper functions to load our files.

data_dir = "data" filename = lambda number: f"{data_dir}/0{number}" loadfile = lambda filename: np.loadtxt(filename, skiprows=57).T qcol = 2 ncol = 8 q = 1 delta = 0.1

To check that we load the files correctly, we plot all the files for each temperature.

```python
fig, axes = plt.subplots(2, 3, figsize=(14, 8))
axes_flat = np.ndarray.flatten(axes)
for i, ax in enumerate(axes_flat):
    ax.set_title(f"T={temps[i]}K")
    ds = [loadfile(filename(expanded_files[i][j])) for j in range(len(expanded_files[i]))]
    for j, data in enumerate(ds):
        ax.plot(data[qcol], data[ncol], label=f"{expanded_files[i][j]}")
    ax.set_xlabel("$q [q_0]$")
    ax.set_ylabel("N over $122$s")
    ax.legend()
axes_flat[-1].set_ylabel("N over $61$s")
fig.suptitle("Raw data")
fig.tight_layout(pad=3.0)
```

*(figure: "Raw data", one panel per temperature showing counts vs. $q$ for each file)*

We see that the $40K$ dataset has twice the number of files, but each was run for half the time, so we need to create a combined $40K$ dataset.

## Investigate overlap in $T=40K$

It appears that the $i$th file in each of the two ranges measures the same values, which when combined gives $3$ combined ranges. However, we want to know if we can assume that the $i$th $q$ values are the same.

```python
mean_error_q_interval = []
for i in range(3):
    data1 = loadfile(filename(expanded_files[5][i]))
    data2 = loadfile(filename(expanded_files[5][i+3]))
    mean_error_q_interval.append(np.mean(np.abs(data1[qcol] - data2[qcol])))

for i, err in enumerate(mean_error_q_interval):
    display(Latex("Average absolute difference between $q$ values in subset {}: {:.0E}.".format(i+1, err)))
```

Average absolute difference between $q$ values in subset 1: 9E-05.

Average absolute difference between $q$ values in subset 2: 2E-05.

Average absolute difference between $q$ values in subset 3: 1E-04.
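As a stricter variant, one could assert the agreement directly; a minimal sketch, where the tolerance `atol=1e-3` is our own choice rather than anything dictated by the data files:

```python
# Hypothetical stricter check: every paired q array must agree within 1e-3.
for i in range(3):
    data1 = loadfile(filename(expanded_files[5][i]))
    data2 = loadfile(filename(expanded_files[5][i+3]))
    assert np.allclose(data1[qcol], data2[qcol], atol=1e-3)
```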

Since the absolute differences are so small, we will assume the $q$ values match up, and we can now create a combined $40K$ dataset by pairwise adding the counts in the files of the two series.

```python
for i in range(3):
    file_numbers = [expanded_files[5][j] for j in [i, i+3]]
    data1 = loadfile(filename(file_numbers[0]))
    data2 = loadfile(filename(file_numbers[1]))
    plt.plot(data1[qcol], data1[ncol] + data2[ncol],
             label=f"{file_numbers[0]}+{file_numbers[1]}")
plt.title("T=40K")
plt.xlabel("$q [q_0]$")
plt.ylabel("N over $122$s")
plt.legend()
plt.show()
```

*(figure: combined $T=40K$ data, counts vs. $q$ for the three paired file sums)*

## Final data

We can now create $6$ datasets - one for each of the measured temperatures - where we combine the three ranges of $q$ we have measured.

```python
datasets = []

# Handle the first 5 datasets.
for files in expanded_files[:-1]:
    ds = []
    for number in files:
        ds.append(loadfile(filename(number)))
    data_combined = np.concatenate(ds, axis=1)
    datasets.append(data_combined)

# Handle the 40K dataset: add counts from the two series pairwise.
ds = []
for i in range(3):
    data1 = loadfile(filename(expanded_files[-1][i]))
    data2 = loadfile(filename(expanded_files[-1][i+3]))
    data1[ncol] += data2[ncol]
    ds.append(data1)
data_combined = np.concatenate(ds, axis=1)
datasets.append(data_combined)
```

## Plot raw data with combined datasets

```python
fig, axes = plt.subplots(2, 3, figsize=(14, 8))
axes_flat = np.ndarray.flatten(axes)
for i, ax in enumerate(axes_flat):
    ax.set_title(f"T={temps[i]}K")
    ax.errorbar(datasets[i][qcol], datasets[i][ncol],
                yerr=np.sqrt(datasets[i][ncol]), fmt=".", label="Raw data")
    ax.set_xlabel("$q [q_a]$")
    ax.set_ylabel("N over $122$s")
    ax.set_ylim(11000, 16000)
    ax.legend()
fig.suptitle("Combined datasets.")
fig.tight_layout(pad=3.0)
```

*(figure: "Combined datasets.", one panel per temperature with Poisson errorbars)*

## Subtract background and fit

We can now subtract the $40K$ background dataset from the other $5$ datasets. We note that the uncertainty of the $i$th datapoint for each temperature is $\sigma_i = \sqrt{N_i + N_{i,40K}}$, where $N_{i,40K}$ is the $i$th count of the $40K$ background dataset.
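This follows from propagating independent Poisson errors through the subtraction: each count carries $\sigma_N = \sqrt{N}$, so

$$\sigma_i^2 = \sigma_{N_i}^2 + \sigma_{N_{i,40K}}^2 = N_i + N_{i,40K}.$$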

We fit the signals with two Gaussians on top of a linear background. The Gaussians are constrained to share the same $\sigma$, since this is dominated by the uncertainty of the equipment, and we constrain the fit further by forcing the centres to $\mu = 1 \pm \delta$.
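Written out, the model (with the parameter ordering used in `p0` below) is

$$N(q) = aq + b + A_1 e^{-\frac{(q-(1-\delta))^2}{2\sigma^2}} + A_2 e^{-\frac{(q-(1+\delta))^2}{2\sigma^2}}.$$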

```python
# Fit function pieces.
straight_line = lambda x, a, b: a*x + b
gauss = lambda x, A, mu, sigma: A * np.exp(-0.5*((x - mu) / sigma)**2.0)

# Set up plots and axes (5 staggered panels).
fig, axes = plt.subplots(2, 3, figsize=(14, 8))
axes = [plt.subplot2grid((2, 6), pos, colspan=2)
        for pos in [(0, 0), (0, 2), (0, 4), (1, 1), (1, 3)]]

# Initial guesses:  a     b   delta   sig    A    A
p0 = [[300,  200, 0.1,  0.04, 500, 500],
      [300, -100, 0.1,  0.04,   0,   0],
      [300, -300, 0.05, 0.04, 500, 500],
      [300, -300, 0.05, 0.04, 200, 200],
      [300, -300, 0.04, 0.02, 200, 200]]

popts = []
psigs = []
for i, ax in enumerate(axes):
    ax.set_title(f"T={temps[i]}K")
    qs = datasets[i][qcol]
    signal_ns = datasets[i][ncol]
    bg_ns = datasets[-1][ncol]
    ns = signal_ns - bg_ns
    # The new sigma is based on the combined counts.
    sigma_ns = np.sqrt(signal_ns + bg_ns)
    ax.errorbar(qs, ns, yerr=sigma_ns, fmt=".", label="Data - Background")
    ax.set_xlabel("$q [q_a]$")
    ax.set_ylabel("N over 122s")
    ax.set_ylim(-750, 1600)
    # Linear fit.
    linpopt, linpcov = curve_fit(straight_line, qs, ns, p0=p0[i][:2],
                                 sigma=sigma_ns, maxfev=10000, absolute_sigma=True)
    linpsig = np.sqrt(np.diag(linpcov))
    # Double-Gaussian fit.
    model = lambda x, *p: straight_line(x, p[0], p[1]) + \
                          gauss(x, p[4], 1-p[2], p[3]) + \
                          gauss(x, p[5], 1+p[2], p[3])
    popt, pcov = curve_fit(model, qs, ns, p0=p0[i],
                           sigma=sigma_ns, maxfev=10000, absolute_sigma=True)
    psig = np.sqrt(np.diag(pcov))
    popts.append(popt)
    psigs.append(psig)
    # Reduced chi-squared for both fits.
    chi2r = lambda x, y, s, f, p: np.sum((y - f(x, *p))**2 / s**2) / (len(x) - len(p))
    chi2r_model = chi2r(qs, ns, sigma_ns, model, popt)
    chi2r_line = chi2r(qs, ns, sigma_ns, straight_line, linpopt)
    ax.text(0.73, 1200, "$A_1$={:.0f}$\pm${:.0E}".format(popt[-2], psig[-2]))
    ax.text(0.73, 1050, "$A_2$={:.0f}$\pm${:.0E}".format(popt[-1], psig[-1]))
    ax.text(0.73, 800, "$\chi^2_m/n$={:.4f}".format(chi2r_model))
    ax.text(0.73, 650, "$\chi^2_l/n$={:.4f}".format(chi2r_line))
    xs = np.linspace(min(qs), max(qs))
    ax.plot(xs, straight_line(xs, *linpopt), 'g', label="Linear fit", alpha=0.8)
    ax.plot(xs, model(xs, *p0[i]), "--", c='orange', label="Initial Fit", alpha=0.45)
    ax.plot(xs, model(xs, *popt), 'orange', label="Model Fit")
    ax.legend()
fig.tight_layout(pad=1.0)
```

*(figure: background-subtracted data per temperature with linear and double-Gaussian fits, fitted $A_1$, $A_2$ and $\chi^2/n$ values annotated)*

We see that for the $1.5K$ dataset the two-Gaussian model fits the data nicely: the peak amplitudes are more than $2\sigma$ away from $0$, and the model's $\chi^2_m/n$ is much closer to $1$ than the linear fit's $\chi^2_l/n$. We thus conclude that we do see a magnetic signal at $T=1.5K$.

At the higher temperatures the picture becomes more nuanced. Both Gaussian peaks stay above $2\sigma$ up to $7K$, are just around $2\sigma$ at $7K$, and just above $2\sigma$ again at $10K$. At $20K$ the peaks are clearly below $2\sigma$. The $\chi^2/n$ of the model fit and the line fit also become comparable somewhere between $10K$ and $20K$.

This would suggest that the critical temperature lies between $10K$ and $20K$.

## Investigations into the critical temperature $T_c$

Ideally, we would like to constrain the critical temperature more tightly than somewhere between $10$ and $20K$. One way of doing this is to fit the significance of the peaks, $(A/\sigma)$, as a function of temperature.
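Concretely, we model the significance as a power law falling with temperature (this is the `decay` function in the code below; the commented-out exponential is an alternative form we did not use),

$$s(T) = \frac{A}{T^{\tau}},$$

where $A$ and $\tau$ are fit parameters (this $A$ is not the peak amplitude).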

We get the uncertainties on $A$ through `curve_fit`, and we can use Barlow eq. 5.24 to estimate the error on $\sigma$:

$$\sigma_\sigma = \frac{\sigma}{\sqrt{2N}}$$

Through error propagation, we find for $f(A,\sigma)=\frac{A}{\sigma}$:

$$\begin{aligned}\sigma_f^2 &= \Big(\frac{df}{dA}\Big)^2 \sigma_A^2 + \Big(\frac{df}{d\sigma}\Big)^2 \sigma_\sigma^2 \\ &= \Big(\frac{1}{\sigma}\Big)^2 \sigma_A^2 + \Big(\frac{-A}{\sigma^2}\Big)^2 \sigma_\sigma^2 \end{aligned}$$

where $\sigma=\sigma_A$.
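With this substitution the first term reduces to $\sigma_A^2/\sigma_A^2 = 1$, so the `sigmas` helper in the code below effectively computes

$$\sigma_f^2 = 1 + \Big(\frac{A}{\sigma_A^2}\Big)^2 \sigma_\sigma^2.$$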

```python
popts = np.array(popts)
psigs = np.array(psigs)
A1col, A2col = 4, 5

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(18, 6))
ax1.plot(temps[:-1], popts[:, A1col]/psigs[:, A1col], ".", c="tab:blue",
         label="$A_1/ \sigma_{A_1}$")
ax1.plot(temps[:-1], popts[:, A2col]/psigs[:, A2col], ".", c="tab:orange",
         label="$A_2/ \sigma_{A_2}$")
ax1.set_title("Significance of $A$ without errorbars")
ax1.legend()
ax2.errorbar(temps[:-1], popts[:, A1col]/psigs[:, A1col], yerr=psigs[:, A1col],
             fmt=".", c="tab:blue", label="$A_1/ \sigma_{A_1}$")
ax3.errorbar(temps[:-1], popts[:, A2col]/psigs[:, A2col], yerr=psigs[:, A2col],
             fmt=".", c="tab:orange", label="$A_2/ \sigma_{A_2}$")
ax2.set_title("Significance of $A_1$ with errorbars.")
ax2.legend()
ax3.set_title("Significance of $A_2$ with errorbars.")
ax3.legend()

decay = lambda x, A, tau: A/(x**tau)  # alternative: A*np.exp(-tau*x)
# Error on f = A/sigma with sigma = sigma_A; the first term reduces to 1.
sigmas = lambda A, As, ss: np.sqrt((1/As)**2 * As**2 + (-A/As**2)**2 * ss**2)
sigmasA1, sigmasA2 = sigmas(popts[:, [A1col, A2col]],
                            psigs[:, [A1col, A2col]],
                            psigs[:, [A1col, A2col]]/np.sqrt(2*len(psigs[:, [A1col]]))).T

poptA1, pcovA1 = curve_fit(decay, temps[:-1], popts[:, A1col]/psigs[:, A1col],
                           absolute_sigma=True, sigma=sigmasA1)
poptA2, pcovA2 = curve_fit(decay, temps[:-1], popts[:, A2col]/psigs[:, A2col],
                           absolute_sigma=True, sigma=sigmasA2)

xs = np.linspace(1.5, 20)
ax1.plot(xs, decay(xs, *poptA1), "tab:blue", label="fit $A_1/ \sigma_{A_1}$")
ax1.plot(xs, decay(xs, *poptA2), "tab:orange", label="fit $A_2/ \sigma_{A_2}$")
ax1.plot([1.5, 20], [2, 2], "r--", linewidth=1, alpha=0.5, label="$2\sigma$ confidence level")
ax2.plot([1.5, 20], [2, 2], "r--", linewidth=1, alpha=0.5, label="$2\sigma$ confidence level")
ax3.plot([1.5, 20], [2, 2], "r--", linewidth=1, alpha=0.5, label="$2\sigma$ confidence level")
ax1.set_xlabel("T")
ax1.set_ylabel("$A/ \sigma_{A}$")
ax2.set_xlabel("T")
ax2.set_ylabel("$A_1/ \sigma_{A_1}$")
ax3.set_xlabel("T")
ax3.set_ylabel("$A_2/ \sigma_{A_2}$")
plt.show()
```

*(figure: significance $A/\sigma_A$ vs. temperature, with power-law fits and the $2\sigma$ level)*

```python
# When does A1 intersect the 2 sigma confidence level?
x0 = fsolve(lambda x: decay(x, *poptA1) - 2, 16)[0]

# Error propagation for 2 = A/(x0**tau).
sigma_A, sigma_tau = np.sqrt(np.diag(pcovA1))
A, tau = poptA1
varx0 = (np.log(A/2)/(tau*np.log(tau)**2))**2 * sigma_tau**2 + \
        (1/(A*np.log(tau)))**2 * sigma_A**2
sigmax0 = np.sqrt(varx0)
display(Latex("We find that $T_c={:.0f} \pm {:.0f}.$".format(x0, sigmax0)))
```

We find that $T_c = 11 \pm 133$.
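For reference, the crossing can also be written in closed form: solving $2 = A/x_0^{\tau}$ gives $x_0 = (A/2)^{1/\tau}$, so the enormous uncertainty simply reflects how poorly constrained $A$ and $\tau$ are.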

It is difficult to assess the correctness of our model when the uncertainties are this large, but even if we found a better model, the uncertainties are simply too large to meaningfully determine $T_c$ with this method. We get no information from this investigation, so we fall back on our previous conclusion that $T_c$ lies somewhere between $10K$ and $20K$.
