Hysteresis in Ferromagnetic Materials

We will investigate magnetisation-curve data for the ferromagnetic material $\text{LiHoF}_4$, collected by Sara Eisenhardt, and determine whether it exhibits hysteresis.

Basics

  • Importing libraries
  • Reading the data
  • Initial processing and verification plots
```python
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
from scipy.stats import norm
from IPython.display import display, Latex
import matplotlib.pyplot as plt
from matplotlib import animation, rc

rc('animation', html='jshtml')

# standard matplotlib settings
def plt_rcParams():
    plt.rcParams['font.size'] = 10
    plt.rcParams['axes.labelsize'] = 10
    plt.rcParams['axes.titlesize'] = 12
    plt.rcParams['axes.linewidth'] = 1
    plt.rcParams['lines.linewidth'] = 1.5
    plt.rcParams['xtick.labelsize'] = 10
    plt.rcParams['ytick.labelsize'] = 10
    plt.rcParams['xtick.direction'] = 'in'
    plt.rcParams['ytick.direction'] = 'in'
    plt.rcParams['xtick.major.size'] = 8
    plt.rcParams['xtick.minor.size'] = 5
    plt.rcParams['ytick.major.size'] = 8
    plt.rcParams['ytick.minor.size'] = 5
    plt.rcParams['legend.fontsize'] = 8
    plt.rcParams['legend.fancybox'] = True
    plt.rcParams['legend.frameon'] = True
    plt.rcParams['legend.framealpha'] = 0

plt_rcParams()
```
```python
headers = ('Time,DC Field 1,DC Field 2,Temperature 1,Temperature 2,'
           'Frequency 1,AC Field 1,RealPart 1a,ImagPart 1a,RealPart 1b,ImagPart 1b,'
           'Frequency 2,AC Field 2,RealPart 2a,ImagPart 2a,RealPart 2b,ImagPart 2b').split(",")
B_col = headers[1]  # T
U_col = headers[8]  # V
U0 = 0.9305 * 10**(-6)  # V

df_pos = pd.read_csv("2010_01_0013.dat", skiprows=14, names=headers, sep=" ")
df_neg = pd.read_csv("2010_01_0014.dat", skiprows=14, names=headers, sep=" ")
```

$U$ is shifted by $U_0$ above the x-axis, so we need to subtract $U_0$ to do a zero-point correction. We know from the assignment text that $M$ is proportional to $U/B$, and since we don't care about the absolute value of $M$, we choose to set the proportionality factor equal to $1$.

```python
B_pos = df_pos[B_col]
U_pos = df_pos[U_col] - U0
B_neg = df_neg[B_col]
U_neg = df_neg[U_col] - U0

M_pos = U_pos / B_pos
M_neg = U_neg / B_neg
```

We verify that the $(B, U)$ data is sensible by plotting it below.

```python
plt.plot(B_pos, U_pos, label="Positive")
plt.plot(B_neg, U_neg, label="Negative")
plt.ticklabel_format(axis="y", style="sci", scilimits=(0,0))
plt.title("Raw data (with zero point correction).")
plt.xlabel("$B$ [T]")
plt.ylabel("$U$ [V]")
plt.legend()
plt.show()
```

*[Figure: raw $U$ vs $B$ data with zero-point correction]*

Next we want to plot the primary graph of the assignment, $(B, M)$, to investigate possible hysteresis.

```python
plt.plot(B_pos, M_pos, label="Positive")
plt.plot(B_neg, M_neg, label="Negative")
plt.xlim(-0.5, 0.5)
plt.ylim(-1e-6, 1e-6)
plt.ticklabel_format(axis="y", style="sci", scilimits=(0,0))
plt.title("Magnetisation: Hysteresis plot no rebinning")
plt.xlabel("$B$ [T]")
plt.ylabel(r"$\propto M$ [V/T]")
plt.legend()
plt.show()
```

*[Figure: hysteresis plot, no rebinning]*

While the data in the plot above do appear to have some of the characteristics of hysteresis, there is a lot of noise, especially around $x=0$. To reduce the amount of noise we see in the plot, we rebin the data.

Rebinning

For our rebin method, we use a fixed number of items per bin, controlled by the desired number of bins and the total number of items in the dataset. We simply discard any straggling datapoints that arise when the number of bins does not evenly divide the total number of datapoints. We do this because it is easier and won't affect our analysis, which is focused around the middle of the datasets, not the last datapoints. Moreover, we will at most discard one bin's worth of datapoints (see the quick sanity check after the function below).

```python
def rebin(xs, n):
    """
    Helper function to facilitate rebinning.
    xs: Array to be rebinned.
    n : Number of bins.
    """
    # I purposefully don't care about any straggling datapoints,
    # since they don't matter for the analysis anyway.
    N = len(xs)
    number_items_in_bin = int(N / n)
    values = []
    uncertainties = []
    for i in range(n):
        start_index = i * number_items_in_bin
        stop_index = start_index + number_items_in_bin  # eqv. to (i+1)*number_items_in_bin
        subset = xs[start_index:stop_index]
        values.append(np.mean(subset))
        # using the error on the mean sigma/sqrt(n)
        # (ddof=1 uses n-1 to estimate the variance)
        uncertainties.append(np.std(subset, ddof=1) / np.sqrt(number_items_in_bin))
    return np.array(values), np.array(uncertainties)
```
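As a quick sanity check of the helper, here is a minimal sketch on a hypothetical toy array (not the measurement data): 17 points rebinned into 5 bins gives 3 points per bin, and the last 2 points are discarded.

```python
# Hypothetical sanity check: 17 points, 5 bins -> 3 points per bin, 2 stragglers discarded.
toy = np.arange(17, dtype=float)
vals, errs = rebin(toy, 5)
print(vals)  # [ 1.  4.  7. 10. 13.] -- the bin means
print(errs)  # each bin: std([x, x+1, x+2], ddof=1)/sqrt(3) = 1/sqrt(3) ≈ 0.577
```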
```python
def gen_bin_plot(N_bins, fmt=".", alpha=1, ax=None):
    """
    A helper function to quickly generate binned plots.
    """
    B_pos_bin, B_pos_bin_sigma = rebin(B_pos, N_bins)
    M_pos_bin, M_pos_bin_sigma = rebin(M_pos, N_bins)
    B_neg_bin, B_neg_bin_sigma = rebin(B_neg, N_bins)
    M_neg_bin, M_neg_bin_sigma = rebin(M_neg, N_bins)
    if not ax:
        ax = plt.gca()
    ax.errorbar(B_pos_bin, M_pos_bin, xerr=B_pos_bin_sigma, yerr=M_pos_bin_sigma,
                fmt=fmt, alpha=alpha, label="Positive")
    ax.errorbar(B_neg_bin, M_neg_bin, xerr=B_neg_bin_sigma, yerr=M_neg_bin_sigma,
                fmt=fmt, alpha=alpha, label="Negative")
    ax.set_title(f"Hysteresis plot rebinned with {N_bins} bins.")
    ax.set_xlabel("$B$ [T]")
    ax.set_ylabel(r"$\propto M$ [V/T]")
    ax.set_xlim(-0.8, 0.8)
    ax.set_ylim(-5e-7, 5e-7)
    ax.ticklabel_format(axis="y", style="sci", scilimits=(0,0))
    ax.legend()

gen_bin_plot(200)
```

*[Figure: hysteresis plot rebinned with 200 bins]*

Determining optimal number of bins

```python
def gen_anim(bins):
    """
    A helper function to quickly generate animated bin plots.
    """
    fig, ax = plt.subplots(1, 1)
    pos = ax.plot([], [], label="Positive")[0]
    neg = ax.plot([], [], label="Negative")[0]

    def update(i):
        B_pos_bin, B_pos_bin_sigma = rebin(B_pos, bins[i])
        M_pos_bin, M_pos_bin_sigma = rebin(M_pos, bins[i])
        B_neg_bin, B_neg_bin_sigma = rebin(B_neg, bins[i])
        M_neg_bin, M_neg_bin_sigma = rebin(M_neg, bins[i])
        pos.set_data(B_pos_bin, M_pos_bin)
        neg.set_data(B_neg_bin, M_neg_bin)
        ax.set_title(f"Bins: {bins[i]}")
        return [pos, neg]

    ax.set_xlim(-1, 1)
    ax.set_ylim(-1e-6, 1e-6)
    plt.ticklabel_format(axis="y", style="sci", scilimits=(0,0))
    ax.set_xlabel("B [T]")
    ax.set_ylabel("~M [V/T]")
    ax.legend()
    return animation.FuncAnimation(fig, update, frames=len(bins), interval=150,
                                   blit=True, repeat_delay=0)
```
```python
%%capture
bins = np.arange(10, 3000, 100)
anim = gen_anim(bins)
```
```python
# Note: displaying the animation below takes a not insignificant amount of time to run.
anim
```

Looking at the animation above, we see that more than 500 bins is probably too many and introduces too much noise. So we would like to inspect the range from 10 to 500 in more detail.

```python
%%capture
bins = np.arange(10, 500, 10)
anim = gen_anim(bins)
```
```python
# Note: displaying the animation below takes a not insignificant amount of time to run.
anim
```

It appears that around $200$ bins strikes a balance between having enough points to analyse, having low enough uncertainties, and the plots being visually comprehensible. For the remainder of this assignment, we will use $200$ bins.

```python
bins_optim = 200

# generate the binned datasets for future reference
B_pos_bin, B_pos_bin_sigma = rebin(B_pos, bins_optim)
M_pos_bin, M_pos_bin_sigma = rebin(M_pos, bins_optim)
B_neg_bin, B_neg_bin_sigma = rebin(B_neg, bins_optim)
M_neg_bin, M_neg_bin_sigma = rebin(M_neg, bins_optim)

# plot
gen_bin_plot(bins_optim)
plt.plot([-1, 1], [0, 0], "r--", linewidth=1, alpha=0.5, label="Axis Guide")
plt.plot([0, 0], [-1, 1], "r--", linewidth=1, alpha=0.5)
plt.legend()
plt.show()
```

*[Figure: rebinned hysteresis plot with axis guides]*

Qualitatively, we see many of the same features as in figure 1 in the assignment. We see the two positive/negative curves forming two axially mirrored S-curves, which converge towards the same values for extreme $B$ values.

The qualitative analysis suggests that we do indeed see hysteresis.

Finding $B_c$

  1. Find data points around $B_c$
  2. Curve-fit a straight line
  3. Propagate the errors

In order to find $B_c$, we assume local linearity and fit a straight line through points around $y=0$ in the case of the positive dataset. For the negative dataset, we have to fit points around a suspected $B_c^-$ found from previous fits, since there is an erroneous point which is not part of the line around $(0,0)$ and which increases our uncertainties.

We choose the optimal number of points as the number of points that gives us the fit with the lowest $\chi^2/n_r$ statistic, with

$$\chi^2 = \sum_i \frac{(y_i - f(x_i))^2}{\sigma^2_{y_i}}$$

for a model $f$ and a dataset $(x_i, y_i)$, with $n_r$ being the number of points minus the number of parameters in the model, in this case $2$.

(Sometimes, the above equation shows up as "Math Processing Error" when displayed in a Jupyter notebook on ERDA. However, if you let a LaTeX renderer render the equation, it displays just fine.)
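As a minimal illustration of the statistic on hypothetical toy data (not the measurement data), for a fixed straight-line model $f(x) = 2x + 1$:

```python
# Hypothetical toy example of chi^2/n_r for the model f(x) = 2x + 1.
x_toy = np.array([0.0, 1.0, 2.0, 3.0])
y_toy = np.array([1.1, 2.9, 5.2, 6.8])
sigma_toy = np.array([0.2, 0.2, 0.2, 0.2])
f_toy = 2*x_toy + 1                      # model predictions
chi2 = np.sum(((y_toy - f_toy) / sigma_toy)**2)
n_r = len(x_toy) - 2                     # 4 points minus 2 model parameters
print(chi2 / n_r)                        # -> 1.25; values near 1 indicate a good fit
```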

```python
indices_pos = np.argsort(np.abs(M_pos_bin))  # choose points closest to y=0.

# We can't do the same thing we did above where we took indices closest to y=0,
# since we have a very imprecise point around (0,0) which ruins our fit.
# Therefore we have to find points closest to a Bc_guess on the x-axis.
Bc_neg_guess = -0.16039  # T, comes from previous fit
indices_neg = np.argsort(np.abs(B_neg_bin - Bc_neg_guess))

# model used to fit
straight_line = lambda x, a, b: a*x + b
model_params = 2

# start with 3 points in order to be sure we can get a non-infinite covariance matrix.
Ns = np.arange(3, bins_optim)

def find_chi2_n(pos):
    """
    Helper function that finds the chi2 over reduced n.
    """
    chi2_n = []
    if pos:
        B = B_pos_bin
        M = M_pos_bin
        M_sigma = M_pos_bin_sigma
        indices = indices_pos
    else:
        B = B_neg_bin
        M = M_neg_bin
        M_sigma = M_neg_bin_sigma
        indices = indices_neg
    for N in Ns:
        popt, pcov = curve_fit(straight_line, B[indices[:N]], M[indices[:N]],
                               p0=[0, 0], sigma=M_sigma[indices[:N]])
        # compute chi2/nr
        r = M[indices[:N]] - straight_line(B[indices[:N]], *popt)
        chi2 = np.sum((r / M_sigma[indices[:N]]) ** 2)
        chi2_n.append(chi2 / (N - model_params))
    return chi2_n

chi2_n_pos = find_chi2_n(pos=True)
chi2_n_neg = find_chi2_n(pos=False)

fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(10,4))
ax1.plot(Ns, chi2_n_pos)
ax2.plot(Ns, chi2_n_neg)
ax1.set_yscale("log")
ax1.set_ylabel(r"$\chi^2/n_r$")
ax1.set_xlabel("N")
ax2.set_xlabel("N")
ax1.set_title(r"$\chi^2/n_r$ graph for $M_p$")
ax2.set_title(r"$\chi^2/n_r$ graph for $M_n$")
fig.suptitle(r"$\chi^2/n_r$ log-graphs")
plt.show()
```

*[Figure: $\chi^2/n_r$ log-graphs for $M_p$ and $M_n$]*

```python
# The optimal number of points is the number of points that gives us the smallest chi2/nr.
number_of_points_for_best_fit_pos = np.argmin(chi2_n_pos) + 3  # +3 since we start with 3 points
number_of_points_for_best_fit_neg = np.argmin(chi2_n_neg) + 3

s_pos = (f"Positive dataset: ${number_of_points_for_best_fit_pos}$ points around $y=0$ "
         f"yields the best $\\chi^2/n_r={np.round(np.min(chi2_n_pos), 4)}$.")
s_neg = (f"Negative dataset: ${number_of_points_for_best_fit_neg}$ points around $x=B_c^-$ "
         f"yields the best $\\chi^2/n_r={np.round(np.min(chi2_n_neg), 4)}$.")
display(Latex(s_pos))
display(Latex(s_neg))
```

Positive dataset: $4$ points around $y=0$ yields the best $\chi^2/n_r=0.2088$.

Negative dataset: $5$ points around $x=B_c^-$ yields the best $\chi^2/n_r=0.1281$.

Computing $B_c$ and error propagation

We have a straight line of the form $y=ax+b$ with fitted $a$ and $b$ along with their uncertainties. We can find the $y=0$ crossing by

$$y=0 \Rightarrow x = B_c = -\frac{b}{a}$$

Error-propagating this function, we get

$$\sigma_{B_c}^2 = \left( \frac{b}{a^2} \right)^2 \sigma_a^2 + \left( -\frac{1}{a} \right)^2 \sigma_b^2$$

(Sometimes, the above equation shows up as "Math Processing Error" when displayed in a Jupyter notebook on ERDA. However, if you let a LaTeX renderer render the equation, it displays just fine.)

```python
def compute_bc(popt, pcov):
    # x = -b/a
    a, b = popt
    a_sigma = np.sqrt(pcov[0][0])
    b_sigma = np.sqrt(pcov[1][1])
    Bc = -b/a
    Bc_sigma = np.sqrt((b/(a**2))**2 * a_sigma**2 + (-1/a)**2 * b_sigma**2)
    return Bc, Bc_sigma
```
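Note that this propagation ignores the covariance between $a$ and $b$, which `curve_fit` also returns in the off-diagonal elements of `pcov`. A minimal sketch of the fuller propagation (the name `compute_bc_with_cov` is a hypothetical variant, not part of the assignment's method):

```python
def compute_bc_with_cov(popt, pcov):
    # Hypothetical variant: includes the a-b covariance term.
    # For Bc = -b/a: dBc/da = b/a^2, dBc/db = -1/a, so
    # sigma_Bc^2 = (b/a^2)^2 sigma_a^2 + (1/a)^2 sigma_b^2 - 2 (b/a^3) cov(a, b)
    a, b = popt
    var = ((b / a**2)**2 * pcov[0][0]
           + (1 / a)**2 * pcov[1][1]
           - 2 * (b / a**3) * pcov[0][1])
    return -b / a, np.sqrt(var)
```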

Below is an animation of how we can choose to fit a line through progressively more points around the area, along with the corresponding $\chi^2/n_r$ and $B_c$ estimates.

```python
%%capture
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,6))

# Background plots
ax1.plot([-1, 1], [0, 0], "r--", alpha=0.5)
ax2.plot([-1, 1], [0, 0], "r--", alpha=0.5)
gen_bin_plot(bins_optim, ax=ax1)
gen_bin_plot(bins_optim, ax=ax2)

# Animated plots setup
points1 = ax1.plot([], [], "ro", label="Fit points")[0]
line1 = ax1.plot([], [], "r", label='Fitted line')[0]
points2 = ax2.plot([], [], "ro", label="Fit points")[0]
line2 = ax2.plot([], [], "r", label='Fitted line')[0]
ax1.legend()
ax2.legend()
bc_text1 = ax1.text(-0.6, 2.8e-7, "")
zero_crossing1 = ax1.plot([], [], "r--", alpha=0.5)[0]
bc_text2 = ax2.text(-0.6, 2.8e-7, "")
zero_crossing2 = ax2.plot([], [], "r--", alpha=0.5)[0]
chi2_text1 = ax1.text(-0.6, 2.1e-7, "")
chi2_text2 = ax2.text(-0.6, 2.1e-7, "")
ax1.set_title("Fitting to positive dataset.")
ax2.set_title("Fitting to negative dataset.")

def update(N):
    N = N + 3  # We start at 3 points.

    # POSITIVE:
    # fit line
    popt_pos, pcov_pos = curve_fit(straight_line,
                                   B_pos_bin[indices_pos[:N]], M_pos_bin[indices_pos[:N]],
                                   p0=[0, 0], sigma=M_pos_bin_sigma[indices_pos[:N]],
                                   absolute_sigma=True)
    # compute bc
    Bc_pos, Bc_pos_sigma = compute_bc(popt_pos, pcov_pos)
    bc_text1.set_text("$B_c^+=${:.5f}T$\\pm${:.0E} T".format(Bc_pos, Bc_pos_sigma))
    zero_crossing1.set_data([Bc_pos, Bc_pos], [-1, 1])
    # compute chi2/nr (divide by N - model_params to match the definition above)
    r_pos = M_pos_bin[indices_pos[:N]] - straight_line(B_pos_bin[indices_pos[:N]], *popt_pos)
    chi2_pos = np.sum((r_pos / M_pos_bin_sigma[indices_pos[:N]]) ** 2)
    chi2_text1.set_text("$\\chi^2/n_r =${:.3f}".format(chi2_pos / (N - model_params)))
    line1.set_data(np.linspace(-1, 1), straight_line(np.linspace(-1, 1), *popt_pos))
    points1.set_data(B_pos_bin[indices_pos[:N]], M_pos_bin[indices_pos[:N]])

    # NEGATIVE:
    # fit line
    popt_neg, pcov_neg = curve_fit(straight_line,
                                   B_neg_bin[indices_neg[:N]], M_neg_bin[indices_neg[:N]],
                                   p0=[0, 0], sigma=M_neg_bin_sigma[indices_neg[:N]])
    # compute bc
    Bc_neg, Bc_neg_sigma = compute_bc(popt_neg, pcov_neg)
    bc_text2.set_text("$B_c^-=${:.5f}T$\\pm${:.0E} T".format(Bc_neg, Bc_neg_sigma))
    zero_crossing2.set_data([Bc_neg, Bc_neg], [-1, 1])
    # compute chi2/nr:
    r_neg = M_neg_bin[indices_neg[:N]] - straight_line(B_neg_bin[indices_neg[:N]], *popt_neg)
    chi2_neg = np.sum((r_neg / M_neg_bin_sigma[indices_neg[:N]]) ** 2)
    chi2_text2.set_text("$\\chi^2/n_r =${:.3f}".format(chi2_neg / (N - model_params)))
    line2.set_data(np.linspace(-1, 1), straight_line(np.linspace(-1, 1), *popt_neg))
    points2.set_data(B_neg_bin[indices_neg[:N]], M_neg_bin[indices_neg[:N]])

    fig.suptitle(f"Line fitting to find $B_c$ using ${N}$ points.")
    return [line1, points1, bc_text1, zero_crossing1, chi2_text1,
            line2, points2, bc_text2, zero_crossing2, chi2_text2]

anim = animation.FuncAnimation(fig, update, frames=80, interval=300, blit=True, repeat_delay=0)
```
```python
anim
```

We see that the optimal number of points for the best fit is confirmed by the animation. We can now compute $B_c$ and its uncertainty for both datasets using the methods we have already discussed. We find

```python
# Fit the lines using the optimal number of points found above.
N_pos = number_of_points_for_best_fit_pos
N_neg = number_of_points_for_best_fit_neg
popt_pos, pcov_pos = curve_fit(straight_line,
                               B_pos_bin[indices_pos[:N_pos]], M_pos_bin[indices_pos[:N_pos]],
                               p0=[0, 0], sigma=M_pos_bin_sigma[indices_pos[:N_pos]])
popt_neg, pcov_neg = curve_fit(straight_line,
                               B_neg_bin[indices_neg[:N_neg]], M_neg_bin[indices_neg[:N_neg]],
                               p0=[0, 0], sigma=M_neg_bin_sigma[indices_neg[:N_neg]])

# Compute Bc and their uncertainties.
Bc_pos, Bc_pos_sigma = compute_bc(popt_pos, pcov_pos)
Bc_neg, Bc_neg_sigma = compute_bc(popt_neg, pcov_neg)
display(Latex("$B_c^+={:.3f}T\\pm{:.3f}T$".format(Bc_pos, Bc_pos_sigma)))
display(Latex("$B_c^-={:.3f}T\\pm{:.3f}T$".format(Bc_neg, Bc_neg_sigma)))
```

$B_c^+ = 0.172\,\mathrm{T} \pm 0.006\,\mathrm{T}$

$B_c^- = -0.162\,\mathrm{T} \pm 0.006\,\mathrm{T}$

In the case of hysteresis between the two datasets, we expect $B_c^+ + B_c^-$ to be equal to $0$ within the combined uncertainty $\sigma_{B_c^\pm}$. Assuming the estimates are normally distributed, we get

$$B_c^\pm = B_c^+ + B_c^-$$

$$\sigma_{B_c^\pm}^2 = \sigma_{B_c^+}^2 + \sigma_{B_c^-}^2$$

```python
# Check that the two measurements of Bc agree with each other (up to the symmetry);
# equivalent to (abs(Bc_pos) - abs(Bc_neg)) = 0.
combined_mu = Bc_pos + Bc_neg
combined_sigma = np.sqrt(Bc_pos_sigma**2 + Bc_neg_sigma**2)

# How many sigmas does the combined_mu deviate from the mean?
sigma_from_mean = (combined_mu - 0) / combined_sigma

# Assuming they are normally distributed, what is the probability that we would see
# more extreme data if they agreed? Is it less than 5%?
# The factor of two is because we want the two-sided probability.
p_data_more_extreme_than_combined_mu = (1 - norm.cdf(sigma_from_mean)) * 2

display(Latex("We find that $B_c^\\pm$ is " + "${:.2f}$".format(sigma_from_mean)
              + "$\\sigma_{B_c^\\pm}$ from 0"
              + " which gives us a ${:.0f}\\%$ chance to get a more extreme result"
                " (which is more than $5\\%$).".format(p_data_more_extreme_than_combined_mu*100)))
```

We find that $B_c^\pm$ is $1.25\,\sigma_{B_c^\pm}$ from 0, which gives us a $21\%$ chance to get a more extreme result (which is more than $5\%$).

Since $1.25\,\sigma_{B_c^\pm}$ is less than $2\,\sigma_{B_c^\pm}$, we conclude that $B_c^+$ and $B_c^-$ agree with each other. This supports our previous finding that we do see hysteresis.

However, we would ideally like to come up with a stronger quantitative argument for hysteresis. One way of doing this is by checking the symmetry for all the points.

Attempt to quantitatively show hysteresis

Let $f(x)$ be a function that describes the positive dataset, and let $g(x)$ be a function that describes the negative dataset. We then want to show that $f(x)+g(-x)=0$ for all $x$ within the uncertainties.

In order to do this, we must know how to model $f(x)$ and $g(x)$, but we do not have a model for the datasets. However, if we look at the graph, it does resemble a sigmoid function, and since the material is ferromagnetic, we might find that the function is not differentiable around $y=0$. We will therefore try to model the datasets using both a single sigmoid and two sigmoid functions (splitting at $y=0$).
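Concretely, the sigmoid parametrisation fitted in the code below is

$$S(x) = p_0 + \frac{p_1}{1 + p_2\, e^{-p_3 x}},$$

and the double-sigmoid model stitches two such curves together at the split point $x = B_c$.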

```python
# sigmoidal curve
sigmoid = lambda x, *p: p[0] + p[1] * 1/(1 + p[2]*np.exp(-x*p[3]))

def double_sigmoid(x, Bc, *p):
    mask = x <= Bc
    y = np.zeros(len(x))
    # two sigmoids breaking at x=Bc (y=0)
    y[mask] = sigmoid(x[mask], *p[:4])
    y[~mask] = sigmoid(x[~mask], *p[4:])
    return y

maxfev = 50000
p0 = [0,1,1,1, 0,1,1,1]

# fitting the double sigmoid model
popt_pos_double, pcov_pos_double = curve_fit(lambda x, *p: double_sigmoid(x, Bc_pos, *p),
                                             B_pos_bin, M_pos_bin, p0,
                                             sigma=M_pos_bin_sigma, maxfev=maxfev)
popt_neg_double, pcov_neg_double = curve_fit(lambda x, *p: double_sigmoid(x, Bc_neg, *p),
                                             B_neg_bin, M_neg_bin, p0,
                                             sigma=M_neg_bin_sigma, maxfev=maxfev)

# fitting the single sigmoid model
popt_pos_single, pcov_pos_single = curve_fit(sigmoid, B_pos_bin, M_pos_bin, p0[:4],
                                             sigma=M_pos_bin_sigma, maxfev=maxfev)
popt_neg_single, pcov_neg_single = curve_fit(sigmoid, B_neg_bin, M_neg_bin, p0[:4],
                                             sigma=M_neg_bin_sigma, maxfev=maxfev)
```
```python
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(12,5))
gen_bin_plot(bins_optim, ax=ax1, alpha=0.3)
gen_bin_plot(bins_optim, ax=ax2, alpha=0.3)
xs = np.linspace(-1, 1, 1000)

# single sigmoid
ax1.plot(xs, sigmoid(xs, *popt_pos_single), c="tab:blue", label="Single sigmoid fit")
ax1.plot(xs, sigmoid(xs, *popt_neg_single), c="tab:orange", label="Single sigmoid fit")
ax1.plot([-1, 1], [0, 0], "r--", linewidth=1, alpha=0.5)
ax1.legend()
ax1.set_title("Single sigmoid fitting")

# double sigmoid
ax2.plot(xs, double_sigmoid(xs, Bc_pos, *popt_pos_double), c="tab:blue", label="Double sigmoid fit")
ax2.plot(xs, double_sigmoid(xs, Bc_neg, *popt_neg_double), c="tab:orange", label="Double sigmoid fit")
ax2.plot([-1, 1], [0, 0], "r--", linewidth=1, alpha=0.5)
ax2.legend()
ax2.set_title("Double sigmoid fitting")

fig.suptitle("Fitting models for the datasets")
plt.show()
```

*[Figure: single and double sigmoid fits to the binned datasets]*

We see that the single sigmoid function doesn't appear to capture the data very well; the curve appears to be too gentle. The double sigmoid appears to do better, but has a discontinuity around the split point, which is unexpected.

To better understand how well the models fit the data, we can look at the residual plots, where the residual $r_i$ is defined as

$$r_i = \frac{y_i - f(x_i)}{\sigma_{y_i}}$$

for a model $f$ and a dataset $(x_i, y_i)$.

```python
### residual plot ###

## computing the residuals.
# double
residuals_double_pos = M_pos_bin - double_sigmoid(B_pos_bin, Bc_pos, *popt_pos_double)
residuals_double_normed_pos = residuals_double_pos / M_pos_bin_sigma
residuals_double_neg = M_neg_bin - double_sigmoid(B_neg_bin, Bc_neg, *popt_neg_double)
residuals_double_normed_neg = residuals_double_neg / M_neg_bin_sigma
# single
residuals_single_pos = M_pos_bin - sigmoid(B_pos_bin, *popt_pos_single)
residuals_single_normed_pos = residuals_single_pos / M_pos_bin_sigma
residuals_single_neg = M_neg_bin - sigmoid(B_neg_bin, *popt_neg_single)
residuals_single_normed_neg = residuals_single_neg / M_neg_bin_sigma

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,5))

# single sigmoid
ax1.scatter(B_pos_bin, residuals_single_normed_pos, marker=".", label="Positive Residual")
ax1.scatter(B_neg_bin, residuals_single_normed_neg, marker=".", label="Negative Residual")
ax1.set_title("Residual plot for sigmoid fit.")
ax1.set_xlabel("B [T]")
ax1.set_ylabel(r"Residual [$\sigma$]")
ax1.legend(loc="lower left")
ax1.set_xlim(-0.8, 0.8)
ax1.set_ylim(-10, 10)
ax1.grid()

# double sigmoid
ax2.scatter(B_pos_bin, residuals_double_normed_pos, marker=".", label="Positive Residual")
ax2.scatter(B_neg_bin, residuals_double_normed_neg, marker=".", label="Negative Residual")
ax2.set_title("Residual plot for double sigmoid fit.")
ax2.set_xlabel("B [T]")
ax2.set_ylabel(r"Residual [$\sigma$]")
ax2.legend(loc="lower left")
ax2.set_xlim(-0.8, 0.8)
ax2.set_ylim(-10, 10)
ax2.grid()

fig.suptitle("Residual Plots")
plt.show()
```

*[Figure: residual plots for the single and double sigmoid fits]*

We see that both models have very large residuals. Moreover, the residuals do not appear to be randomly scattered, but show systematic structure. This suggests that the models we have used for the fits are not correct and don't properly capture the data.

However, for completeness, we will check the symmetry anyway.

```python
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(12,5))

# f(x) + g(-x)
sigmoid_sym = sigmoid(xs, *popt_pos_single) + sigmoid(-xs, *popt_neg_single)
double_sigmoid_sym = (double_sigmoid(xs, Bc_pos, *popt_pos_double)
                      + double_sigmoid(-xs, Bc_neg, *popt_neg_double))

ax1.plot(xs, sigmoid_sym, label="$f(x)+g(-x)$")
ax2.plot(xs, double_sigmoid_sym, label="$f(x)+g(-x)$")
ax1.set_title("Symmetry test for sigmoid fit")
ax2.set_title("Symmetry test for double sigmoid fit")
ax1.legend()
ax2.legend()
ax1.set_xlabel("B [T]")
ax1.set_ylabel("~M [V/T]")
ax2.set_xlabel("B [T]")
ax1.set_ylim(-5e-7, 5e-7)
plt.ticklabel_format(axis="y", style="sci", scilimits=(0,0))
plt.show()
```

*[Figure: symmetry test $f(x)+g(-x)$ for both fits]*

Visually, both models appear to be approximately symmetric. The dip in the double sigmoid fit is probably due to slight differences in where $B_c$ is located for the negative and positive datasets, which puts the split at slightly different places.

We will now try to fit a line to both of these plots and check that the bias is $0$ within the uncertainty. However, even if that is the case, it won't actually tell us much, since the models don't describe the data very well, as seen in the residual plots.

```python
# fit straight lines
p0 = [0, 0]
popt_line_single, pcov_line_single = curve_fit(straight_line, xs, sigmoid_sym, p0)
popt_line_double, pcov_line_double = curve_fit(straight_line, xs, double_sigmoid_sym, p0)

single_bias = popt_line_single[1]
single_bias_uncertainty = np.sqrt(pcov_line_single[1][1])
double_bias = popt_line_double[1]
double_bias_uncertainty = np.sqrt(pcov_line_double[1][1])

display(Latex("Single Sigmoid bias $={:.1E} V/T\\pm{:.0E} V/T$ which is ${:.2f}\\sigma$ away from zero."
              .format(single_bias, single_bias_uncertainty, single_bias/single_bias_uncertainty)))
display(Latex("Double Sigmoid bias $={:.0E} V/T\\pm{:.0E} V/T$ which is ${:.2f}\\sigma$ away from zero."
              .format(double_bias, double_bias_uncertainty, double_bias/double_bias_uncertainty)))
```

Single Sigmoid bias $= 6.2\times10^{-10}\,\mathrm{V/T} \pm 7\times10^{-11}\,\mathrm{V/T}$, which is $9.01\sigma$ away from zero.

Double Sigmoid bias $= 6\times10^{-10}\,\mathrm{V/T} \pm 3\times10^{-10}\,\mathrm{V/T}$, which is $2.44\sigma$ away from zero.

As we saw in the graph, the double sigmoid model describes the data better than the single sigmoid model; however, the bias is still more than $2\sigma$ away from zero.

However, the uncertainty of the double sigmoid bias is likely overestimated, since we didn't use the mean of the two $B_c$ estimates. We didn't do that because we thought it would be circular reasoning to assume symmetry when testing for symmetry. But as mentioned above, neither model is a particularly good fit, and both have systematic biases, so this argument doesn't tell us much about whether we see hysteresis.

Another way of modelling the data, which we have not done in this assignment, is to extend the linear fit method we used to find $B_c$, but do so for every point. This way we only assume local linearity. A minimal sketch of this idea is given below.
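Here is a minimal sketch of that idea, assuming a fixed window of the $k$ binned points nearest each evaluation field (the window size `k` and the helper name `local_linear` are hypothetical choices, not part of the assignment):

```python
def local_linear(B, M, M_sigma, B_eval, k=9):
    # Hypothetical sketch: weighted straight-line fit to the k binned points
    # nearest each evaluation field, returning the local model value there.
    fitted = []
    for b0 in B_eval:
        idx = np.argsort(np.abs(B - b0))[:k]  # the k nearest neighbours in B
        popt, _ = curve_fit(straight_line, B[idx], M[idx],
                            p0=[0, 0], sigma=M_sigma[idx])
        fitted.append(straight_line(b0, *popt))
    return np.array(fitted)

# e.g. a locally linear description of the positive dataset:
# M_local = local_linear(B_pos_bin, M_pos_bin, M_pos_bin_sigma, np.linspace(-0.8, 0.8, 100))
```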

We will therefore rely on our previous arguments: the graphs qualitatively appear to exhibit hysteresis, and there is symmetry between the $B_c^+$ and $B_c^-$ estimates.

Analysis of the neutral 2010_01_0012 dataset

Finally, we look at the neutral dataset to try to confirm that we do not see hysteresis there.

```python
df_neu = pd.read_csv("2010_01_0012.dat", skiprows=14, names=headers, sep=" ")
B_neu = df_neu[B_col]
U_neu = df_neu[U_col] - U0
M_neu = U_neu / B_neu

B_neu_bin, B_neu_bin_sigma = rebin(B_neu, bins_optim)
M_neu_bin, M_neu_bin_sigma = rebin(M_neu, bins_optim)
```
```python
gen_bin_plot(bins_optim)
plt.errorbar(B_neu_bin, M_neu_bin, yerr=M_neu_bin_sigma, xerr=B_neu_bin_sigma,
             fmt=".", label="Neutral")
plt.legend()
plt.show()
```

*[Figure: neutral dataset overlaid on the binned hysteresis plot]*

We clearly see that the neutral 2010_01_0012 dataset doesn't line up with the hysteresis of either of the other datasets, as expected.
