#+PROPERTY: header-args :session gaussian_analytic :kernel python :pandoc yes :async yes
In this notebook, we compare the analytic solution with the numeric
result. A nicer version, with the code moved out into a proper Python
module, can be found [[file:analytic_calculation_with_gaussflow.org][here]].
#+begin_src jupyter-python :results none
import scipy
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
import sys
import utilities
import quadpy
from hops.util import bcf
import itertools
#+end_src
* System Set-Up
#+begin_src jupyter-python :results none
Ω = 1.5
A = np.array([[0, Ω], [-Ω, 0]])
η = 2
ω_c = 1
t_max = 25
def α(t):
    return η / np.pi * (ω_c / (1 + 1j * ω_c * t)) ** 2


def α_0_dot(t):
    return 2 * η / (1j * np.pi) * (ω_c / (1 + 1j * ω_c * t)) ** 3
#+end_src
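For reference: this \(\alpha\) is the zero-temperature BCF of an Ohmic
spectral density with exponential cutoff, \(J(\omega) = \eta \omega
e^{-\omega/\omega_c}\), in the convention \(\alpha(t) =
\frac{1}{\pi}\int_0^\infty J(\omega)\, e^{-i\omega t}\,\mathrm{d}\omega\).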
* Exponential Fit
First we need an exponential fit for our BCF.
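That is, we look for coefficients and exponents such that
\(\alpha(\tau) \approx \sum_n G_n e^{-W_n \tau}\) and
\(\dot\alpha_0(\tau) \approx \sum_n P_n e^{-L_n \tau}\) (this is the
convention assumed by ~utilities.α_apprx~ below).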
#+begin_src jupyter-python
W, G_raw = utilities.fit_α(α, 3, 80, 10_000)
L, P = utilities.fit_α(α_0_dot, 5, 80, 10_000)
#+end_src
#+RESULTS:
The fits look quite good:
#+begin_src jupyter-python
τ = np.linspace(0, t_max, 1000)
with utilities.hiro_style():
    fig, ax = utilities.plot_complex(τ, α(τ), label="exact")
    utilities.plot_complex(
        τ, utilities.α_apprx(τ, G_raw, W), ax=ax, label="fit", linestyle="--"
    )
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/9c454eedaf97fa770850f6d4948856df8948b8df.svg]]
#+begin_src jupyter-python
τ = np.linspace(0, -t_max, 1000)
with utilities.hiro_style():
    fig, ax = utilities.plot_complex(τ, α(τ), label="exact")
    utilities.plot_complex(
        τ, utilities.α_apprx(-τ, G_raw.conj(), W.conj()), ax=ax, label="fit", linestyle="--"
    )
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/37cc788143d55668dd7087df2e59b1e330223895.svg]]
#+begin_src jupyter-python
fig, ax = utilities.plot_complex(τ, α_0_dot(τ), label="exact")
utilities.plot_complex(
    τ, utilities.α_apprx(τ, P, L), ax=ax, label="fit", linestyle="--"
)
#+end_src
#+RESULTS:
:RESULTS:
| hline | <AxesSubplot:> |
[[file:./.ob-jupyter/666257d29aba72e31b8e20527e5cf43a8c475808.svg]]
:END:
* Analytical Solution
** Calculate the Magic Numbers
We begin with the $\varphi_n$ and $G_n$ from the original ~G_raw~,
i.e. \(G^{\mathrm{raw}}_n = G_n e^{-i\varphi_n}\).
#+begin_src jupyter-python
φ = -np.angle(G_raw)
φ
#+end_src
#+RESULTS:
: array([-1.18710245, 0.64368323, 2.11154332])
#+begin_src jupyter-python
G = np.abs(G_raw)
G
#+end_src
#+RESULTS:
: array([0.51238635, 0.62180167, 0.10107935])
Now we take the real and imaginary parts of the ~W~ parameters and
call them $\gamma_n$ and $\delta_n$, i.e. \(W_n = \gamma_n + i\delta_n\).
#+begin_src jupyter-python
γ, δ = W.real, W.imag
#+end_src
#+RESULTS:
Now the \(s_n, c_n\).
#+begin_src jupyter-python
s, c = np.sin(φ), np.cos(φ)
#+end_src
#+RESULTS:
Now we calculate the roots of $f_n(z)=-G_n ((z+\gamma_n) s_n + \delta_n
c_n)$.
Normally we should be wary of one of the \(\delta_n\) being zero, but
this is not the case here.
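Setting \(f_n(z) = 0\) yields the explicit root \(z^{(f)}_n = -\gamma_n
- \delta_n c_n / s_n\), which is what we compute here.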
#+begin_src jupyter-python
roots_f = -(γ + δ * c/s)
roots_f
#+end_src
#+RESULTS:
: array([-1.19725058, -2.0384181 , -0.23027627])
Now the \(z_k\), the roots of \(\delta_k^2 + (\gamma_k + z)^2\). *We
don't include the conjugates.*
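This works because \(\delta_k^2 + (\gamma_k + z)^2 = (z + W_k)(z +
W_k^{\ast})\) with \(W_k = \gamma_k + i\delta_k\), so the roots are
simply \(z_k = -W_k\) (and their conjugates \(-W_k^{\ast}\)).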
#+begin_src jupyter-python
roots_z = -W
roots_z
#+end_src
#+RESULTS:
: array([-2.29230292-2.71252483j, -1.07996581-0.719112j ,
: -0.28445344-0.09022829j])
** Construct the Polynomials
#+begin_src jupyter-python
from numpy.polynomial import Polynomial
#+end_src
#+RESULTS:
We later need \(f_0(z) = \prod_k (z-z_k) (z-z_k^{\ast})\).
#+begin_src jupyter-python
f_0 = utilities.poly_real(Polynomial.fromroots(np.concatenate((roots_z, roots_z.conj()))))
f_0
#+end_src
#+RESULTS:
Another polynomial is simply \(p_1 = (z^2 + \Omega^2)\prod_k (z-z_k)
(z-z_k^{\ast}) = (z^2 + \Omega^2)\, f_0\), which we can construct
directly from \(f_0\).
#+begin_src jupyter-python
p_1 = Polynomial([Ω**2, 0, 1]) * f_0
p_1
#+end_src
#+RESULTS:
The next ones are given through \(q_n=\Omega f_n(z) \prod_{k\neq
n}(z-z_k) (z-z_k^{\ast})\).
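Since \(f_n(z) = -G_n s_n (z - z^{(f)}_n)\), each \(q_n\) is, up to the
scalar prefactor \(-G_n \Omega s_n\), a monic polynomial that we can
again construct from its roots.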
#+begin_src jupyter-python
q = [
    -G_c
    * Ω
    * s_c
    * utilities.poly_real(
        Polynomial.fromroots(
            np.concatenate(
                (
                    [root_f],
                    utilities.except_element(roots_z, i),
                    utilities.except_element(roots_z, i).conj(),
                )
            )
        )
    )
    for i, (root_f, G_c, s_c) in enumerate(zip(roots_f, G, s))
]
#+end_src
#+RESULTS:
With this we construct our master polynomial \(p = p_1 + \sum_n q_n\).
#+begin_src jupyter-python
p = p_1 + sum(q)
p
#+end_src
#+RESULTS:
And find its roots.
#+begin_src jupyter-python
master_roots = p.roots()
master_roots
#+end_src
#+RESULTS:
: array([-2.28139877-2.68284887j, -2.28139877+2.68284887j,
: -0.93979297-0.62025112j, -0.93979297+0.62025112j,
: -0.24204514-0.10390896j, -0.24204514+0.10390896j,
: -0.19348529-1.49063362j, -0.19348529+1.49063362j])
Let's see if they're all unique. This should make things easier.
#+begin_src jupyter-python
np.unique(master_roots).size == master_roots.size
#+end_src
#+RESULTS:
: True
Very nice!
** Calculate the Residues
These are the prefactors for the diagonal.
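For a simple root \(z_l\) of \(p\), the residue of \(f_0(z)/p(z)\) is
\(R_l = f_0(z_l)/p'(z_l)\), which is exactly what we evaluate here.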
#+begin_src jupyter-python
R_l = f_0(master_roots) / p.deriv()(master_roots)
R_l
#+end_src
#+RESULTS:
: array([ 0.00251589-0.00080785j, 0.00251589+0.00080785j,
: 0.06323421+0.02800137j, 0.06323421-0.02800137j,
: 0.02732669-0.00211665j, 0.02732669+0.00211665j,
: -0.09307679+0.36145133j, -0.09307679-0.36145133j])
And the Laplace transform of \(\alpha\).
#+begin_src jupyter-python
def α_tilde(z):
    return (
        -G[None, :]
        * ((z[:, None] + γ[None, :]) * s[None, :] + δ[None, :] * c[None, :])
        / (δ[None, :] ** 2 + (γ[None, :] + z[:, None]) ** 2)
    ).sum(axis=1)
#+end_src
#+RESULTS:
And these are for the most complicated element.
#+begin_src jupyter-python
R_l_21 = (Ω + α_tilde(master_roots)) * f_0(master_roots) / p.deriv()(master_roots)
R_l_21
#+end_src
#+RESULTS:
: array([-0.00325014-0.02160514j, -0.00325014+0.02160514j,
: 0.00074814-0.05845205j, 0.00074814+0.05845205j,
: -0.00094159-0.00084894j, -0.00094159+0.00084894j,
: 0.00344359+0.56219924j, 0.00344359-0.56219924j])
Now we can calculate \(G\).
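Each matrix element of the propagator is a sum over the master roots,
for instance \(G_{11}(t) = \sum_l \Re\left[R_l z_l e^{z_l t}\right]\)
and \(G_{12}(t) = \Omega \sum_l \Re\left[R_l e^{z_l t}\right]\).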
#+begin_src jupyter-python
def G_12_ex(t):
    t = np.asarray(t)
    t_shape = t.shape
    if len(t.shape) == 0:
        t = t.reshape((1,))
    return Ω * (R_l[None, :] * np.exp(t[:, None] * master_roots[None, :])).real.sum(
        axis=1
    ).reshape(t_shape)


def G_11_ex(t):
    t = np.asarray(t)
    t_shape = t.shape
    if len(t.shape) == 0:
        t = np.array([t])
    return (
        R_l[None, :]
        * master_roots[None, :]
        * np.exp(t[:, None] * master_roots[None, :])
    ).real.sum(axis=1).reshape(t_shape)


def G_21_ex(t):
    t = np.asarray(t)
    return -(R_l_21[None, :] * np.exp(t[:, None] * master_roots[None, :])).real.sum(
        axis=1
    )


def G_21_ex_alt(t):
    t = np.asarray(t)
    return (
        R_l[None, :]
        * master_roots[None, :] ** 2
        * np.exp(t[:, None] * master_roots[None, :])
    ).real.sum(axis=1) / Ω


def G_ex(t):
    t = np.asarray(t)
    if t.size == 1:
        t = np.array([t])
    diag = G_11_ex(t)
    return (
        np.array([[diag, G_12_ex(t)], [G_21_ex(t), diag]]).swapaxes(0, 2).swapaxes(1, 2)
    )


def G_ex_new(t):
    t = np.asarray(t)
    if t.size == 1:
        t = np.array([t])
    g_12 = R_l[None, :] * np.exp(t[:, None] * master_roots[None, :])
    diag = master_roots[None, :] * g_12
    g_21 = master_roots[None, :] * diag / Ω
    return (
        np.array([[diag, g_12 * Ω], [g_21, diag]])
        .real.sum(axis=3)
        .swapaxes(0, 2)
        .swapaxes(1, 2)
    )
#+end_src
#+RESULTS:
#+begin_src jupyter-python
with utilities.hiro_style():
    plt.plot(τ, G_ex_new(τ).reshape(len(τ), 4))
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/43540f787051db4b09fa82a1204956050edca117.svg]]
* Energy Flow
We adopt the terminology from my notes.
#+begin_src jupyter-python :results none
A = G_11_ex
B = G_12_ex
Pk = P  # exponential fit coefficients of α̇_0
Bk = R_l * Ω  # pole-expansion coefficients of G_12
Gk = G_raw  # exponential fit coefficients of α
Lk = L  # exponents of the α̇_0 fit
Ck = -master_roots  # Mind the extra sign!
Wk = W  # exponents of the α fit
Ak = master_roots * R_l  # pole-expansion coefficients of G_11
n = 1  # occupation number entering the initial variances below
q_s_0 = 1 + 2 * n  # initial variances (see notes)
p_s_0 = 1 + 2 * n
qp_0 = 1j  # initial ⟨qp⟩-type correlation (see notes)
#+end_src
First, we calculate the coefficient matrices.
#+begin_src jupyter-python :results none
Γ1 = (Bk[:, None] * Pk[None, :]) / (Lk[None, :] - Ck[:, None])
Γ2 = (Bk[:, None] * Gk[None, :]) / (Ck[:, None] - Wk[None, :])
Γ3 = (Bk[:, None] * Gk.conj()[None, :]) / (Ck[:, None] + Wk.conj()[None, :])
ΓA = (Ak[:, None] * Pk[None, :]) / (Lk[None, :] - Ck[:, None])
#+end_src
2021-11-24 19:16:24 +01:00
The convolution-free part is easy to calculate.
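(The only ingredient is the elementary identity \(\int_0^t e^{-C\tau}
e^{-L(t-\tau)}\,\mathrm{d}\tau = \frac{e^{-Ct} - e^{-Lt}}{L - C}\),
which is where the \(1/(L_k - C_n)\) denominators in \(\Gamma_1\) and
\(\Gamma_A\) come from.)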
#+begin_src jupyter-python :results none
def A_conv(t):
    t = np.asarray(t)
    result = np.zeros_like(t, dtype="complex128")
    for (n, m) in itertools.product(range(len(Ak)), range(len(Pk))):
        result += ΓA[n, m] * (np.exp(-Ck[n] * t) - np.exp(-Lk[m] * t))
    return result


def B_conv(t):
    t = np.asarray(t)
    result = np.zeros_like(t, dtype="complex128")
    for (n, m) in itertools.product(range(len(Bk)), range(len(Pk))):
        result += Γ1[n, m] * (np.exp(-Ck[n] * t) - np.exp(-Lk[m] * t))
    return result


def flow_conv_free(t):
    a, b = A(t), B(t)
    ac, bc = A_conv(t), B_conv(t)
    return (
        -1
        / 2
        * (
            q_s_0 * a * ac + p_s_0 * b * bc + qp_0 * a * bc + qp_0.conjugate() * b * ac
        ).imag
    )
#+end_src
The part containing the convolution with the BCF is a bit more
involved. The first version originates from my calculations and the
second one was generated with SageMath. Thank God, they do agree now.
#+begin_src jupyter-python :results none
def conv_part(t):
    t = np.asarray(t)
    result = np.zeros_like(t, dtype="float")

    for (m, k, n, l) in itertools.product(
        range(len(Bk)), range(len(Pk)), range(len(Bk)), range(len(Gk))
    ):
        g_1_2 = (
            Γ1[m, k]
            * Γ2[n, l]
            * (
                (1 - np.exp(-(Ck[m] + Wk[l]) * t)) / (Ck[m] + Wk[l])
                + (np.exp(-(Ck[m] + Ck[n]) * t) - 1) / (Ck[m] + Ck[n])
                + (np.exp(-(Lk[k] + Wk[l]) * t) - 1) / (Lk[k] + Wk[l])
                + (1 - np.exp(-(Lk[k] + Ck[n]) * t)) / (Lk[k] + Ck[n])
            )
        )
        g_1_3 = (
            Γ1[m, k]
            * Γ3[n, l]
            * (
                (1 - np.exp(-(Ck[m] + Ck[n]) * t)) / (Ck[m] + Ck[n])
                - (1 - np.exp(-(Lk[k] + Ck[n]) * t)) / (Lk[k] + Ck[n])
                - (np.exp(-(Ck[n] + Wk[l].conj()) * t) - np.exp(-(Ck[m] + Ck[n]) * t))
                / (Ck[m] - Wk[l].conj())
                + (np.exp(-(Ck[n] + Wk[l].conj()) * t) - np.exp(-(Lk[k] + Ck[n]) * t))
                / (Lk[k] - Wk[l].conj())
            )
        )
        result += -1 / 2 * (g_1_2.imag + g_1_3.imag)
    return result


import monster_formula


def conv_part_sage(t):
    t = np.asarray(t)
    result = np.zeros_like(t, dtype="float")
    for (n, k, l, m) in itertools.product(
        range(len(Bk)), range(len(Pk)), range(len(Gk)), range(len(Bk))
    ):
        result += (
            -1
            / 2
            * (
                monster_formula.conv_part(
                    t,
                    Pk[k],
                    Lk[k],
                    Bk[n],
                    Ck[n],
                    Bk[m],
                    Ck[m],
                    Gk[l],
                    Wk[l],
                    Gk[l].conj(),
                    Wk[l].conj(),
                )
            ).imag
        )
    return result
#+end_src
The contribution of the convolution part is pretty minor for zero
temperature.
#+begin_src jupyter-python
τ = np.linspace(0, t_max, 5000)
plt.plot(τ, conv_part(τ))
#+end_src
#+RESULTS:
:RESULTS:
| <matplotlib.lines.Line2D | at | 0x7f85099ac190> |
[[file:./.ob-jupyter/ed8ce474b528ddf7104399b40e018db0180b3ee5.svg]]
:END:
Now we can define the flow:
#+begin_src jupyter-python :results none
def flow(t):
    return flow_conv_free(t) + conv_part(t)
#+end_src
2021-11-24 19:16:24 +01:00
And plot it against the numerically obtained flow.
#+begin_src jupyter-python
with open('res.npy', 'rb') as f:
    old_τ = np.load(f)
    old_flow_τ = np.load(f)

with utilities.hiro_style():
    plt.plot(old_τ, flow(old_τ))
    plt.plot(old_τ, old_flow_τ)
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/6c77bc1a9d844fa3e33a14c077310c32a7eec4a8.svg]]
They do agree, but I think the accuracy of the numerical integration
is getting worse with time.
#+begin_src jupyter-python
scipy.integrate.quad(flow, 0, 20)
#+end_src
#+RESULTS:
| 1.6004236143630652 | 1.564508138231918e-08 |
Interesting... maybe the fact that it does not converge to zero
exactly has to do with this.
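As a quick sanity check (just a sketch; the split point \(t=20\) and
the upper cutoff are chosen arbitrarily), we can compare the
integrated flow on the plotted window with the remaining tail:
#+begin_src jupyter-python
# Rough tail estimate (sketch): if the flow really decayed to zero, the
# contribution beyond t = 20 should be negligible compared to the bulk.
bulk, bulk_err = scipy.integrate.quad(flow, 0, 20, limit=200)
tail, tail_err = scipy.integrate.quad(flow, 20, 100, limit=200)
bulk, tail
#+end_src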