Welcome to pyProximation’s documentation!

Contents:

Introduction

This is a brief documentation for using pyProximation. pyProximation was originally written to solve systems of integro-differential equations via the collocation method in an arbitrary number of variables. The current implementation of the method is based on a finite dimensional orthogonal system of functions.

Requirements and dependencies

Since this is a Python package, one clearly needs to have Python available. pyProximation relies on the following packages:

For symbolic computation, pyProximation needs the presence of either sympy or sage.

Download

pyProximation can be obtained from https://github.com/mghasemi/pyProximation.

Installation

To install pyProximation, run the following in a terminal:

sudo python setup.py install

Documentation

The documentation of pyProximation is prepared via Sphinx.

License

pyProximation is distributed under the MIT license:

MIT License

Copyright (c) 2016 Mehdi Ghasemi

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Symbolic toolkits

The ‘pyProximation’ library performs symbolic computations as well as numerical calculations. For numerical calculations, it relies on the excellent libraries ‘NumPy’ and ‘SciPy’. There are various toolkits that are able to handle symbolic computations, each suitable for certain purposes. ‘pyProximation’ takes advantage of three different toolkits:

  1. sympy
  2. sage
  3. symengine

SymPy

SymPy is a Python library for symbolic computation. It provides computer algebra capabilities either as a standalone application, as a library to other applications, or live on the web as SymPy Live or SymPy Gamma. SymPy is trivial to install and to inspect because it is written entirely in Python with few dependencies. This ease of access, combined with a simple and extensible code base in a well-known language, makes SymPy a computer algebra system with a relatively low barrier to entry.

SymPy includes features ranging from basic symbolic arithmetic to calculus, algebra, discrete mathematics and quantum physics. It is capable of formatting the result of the computations as LaTeX code.

SymPy is free software and is licensed under the New BSD License. The lead developers are Ondřej Čertík and Aaron Meurer.

Usage

SymPy can be imported by:

from sympy import *

To introduce a symbolic variable called x, use:

x = Symbol('x')

and to define a symbolic function y depending on the symbolic variables x and t, use:

t = Symbol('t')
y = Function('y')(x, t)

To introduce an equation like \(y(x,t)=xt\), use Eq:

equation = Eq(y, x*t)

For differentiation of y with respect to x of order n, i.e. \(\frac{\partial^ny}{\partial x^n}\), enter:

diff(y, x, n)

For integration of y with respect to x, i.e., \(\int y dx\), write:

integrate(y, x)

and for definite integral \(\int_a^b y dx\) write:

integrate(y, (x, a, b))

Sage

SageMath (previously Sage or SAGE, System for Algebra and Geometry Experimentation) is a free open-source mathematics software system licensed under the GPL with features covering many aspects of mathematics, including algebra, combinatorics, numerical mathematics, number theory, and calculus. It builds on top of many existing open-source packages: NumPy, SciPy, matplotlib, Sympy, Maxima, GAP, FLINT, R and many more. Access their combined power through a common, Python-based language or directly via interfaces or wrappers.

Usage

When present, Sage can be imported by:

from sage.all import *

To introduce a symbolic variable called x, use:

x = var('x')

and to define a symbolic function y depending on the symbolic variables x and t, use:

t = var('t')
y = function('y')(x, t)

To introduce an equation like \(y(x,t)=xt\), use ==:

equation = (y == x*t)

For differentiation of y with respect to x of order n, i.e. \(\frac{\partial^ny}{\partial x^n}\), enter:

diff(y, x, n)

For integration of y with respect to x, i.e., \(\int y dx\), write:

integral(y, x)

and for definite integral \(\int_a^b y dx\) write:

integral(y, (x, a, b))

SymEngine

SymEngine is a standalone fast C++ symbolic manipulation library. Optional thin wrappers allow usage of the library from other languages, e.g.:

  • C wrappers allow usage from C, or as a basis for other wrappers;
  • Python wrappers allow easy usage from Python and integration with SymPy and Sage;
  • Ruby wrappers;
  • Julia wrappers;
  • Haskell wrappers.

It is licensed under the MIT license.

Usage

SymEngine can be imported by:

from symengine import *

To introduce a symbolic variable called x, use:

x = Symbol('x')

and to define a symbolic function y depending on the symbolic variables x and t, use:

t = Symbol('t')
y = function_symbol('y', x, t)

SymEngine does not have a specific command for defining an equation, so to work with equations one should always express them as expressions equated to 0. To introduce an equation like \(y(x,t)=xt\), use:

equation = y - x*t

For differentiation of y with respect to x, i.e. \(\frac{\partial y}{\partial x}\), enter:

diff(y, x)

To achieve a higher order of differentiation, repeat the above call as many times as necessary, as in the sketch below.
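
For instance, a small helper (hypothetical, not part of the SymEngine API) can apply the diff call shown above repeatedly:

# hypothetical helper: n-th order derivative by repeated differentiation
def nth_diff(expr, var, n):
    for _ in range(n):
        expr = diff(expr, var)
    return expr

# third order derivative of y with respect to x
print nth_diff(y, x, 3)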

The current version of SymEngine does not support integration.

Measures

Density function

pyProximation implements two different scenarios for measure spaces:
  1. Continuous case, where the support of the measure is given as a compact subset (box) of \(\mathbb{R}^n\), and
  2. Discrete case, where a finite set of points and their weights are given.

Continuous measure spaces

For the continuous case, we generally assume that the support of the measure is a product of closed intervals, i.e., \(\prod_{i=1}^{n}[a_i, b_i]\), where for each i, \(a_i<b_i\). Such a set can be defined as a list of ordered pairs of real numbers as [(a1, b1), (a2, b2), ..., (an, bn)]. Moreover, when we speak about a subset of the support, we always refer to a box, defined as a list of 2-tuples.

In this case, the measure is given implicitly as a density function \(w(x)\). So the measure of a set S is given by

\[\mu(S) = \int_S w(x)dx.\]

For example, the following code defines the Lebesgue measure on \([-1, 1]\times[-1, 1]\) and finds the measure of the set \([0, 1]\times[-1, 0]\):

# import the Measure class
from pyProximation import Measure
# define the support of the measure
D = [(-1, 1), (-1, 1)]
# define the measure with the constant density 1
M = Measure(D, 1)
# define a set called S
S = [(0, 1), (-1, 0)]
# find the measure of the set S
print M.measure(S)

Discrete measure spaces

In this case, the measure is a weighted sum of Dirac measures. Given a set \(X=\{x_1, \dots, x_n\}\) and corresponding non-negative weights \(w_1,\dots, w_n\), one defines a measure as \(\mu = \sum_{i=1}^n w_i \delta_{x_i}\). Then the measure of a subset \(S=\{x_{i_1},\dots,x_{i_k}\}\) of X is given by

\[\mu(S) = \int_S d\mu = \sum_{j=1}^k w_{i_j}\]

The following is a sample code for discrete case:

# import the Measure class
from pyProximation import Measure
# define the support and density
D = {'x1':1, 'x2':.5, 'x3':1.1, 'x4':.6}
# define the measure
M = Measure(D)
# define a set called S
S = ['x2', 'x3']
# find the measure of the set S
print M.measure(S)

Integrals

Suppose that a measure space \((X, \mu)\) and a measurable function f on X are given. The method integral computes \(\int_X fd\mu\). If \(\mu\) is discrete, then f can be a dictionary with keys as points of domain and values as evaluation at each point. Otherwise, f is simply a numerical function:

from pyProximation import Measure
from numpy import sqrt
# define the density function
w = lambda x:1./sqrt(1.-x**2)
# define the support
D = [(-1, 1)]
# initiate the measure space
M = Measure(D, w)
# set f(x) = x^2
f = lambda x: x**2
# integrate f(x) w.r.t. w(x)
print M.integral(f)

Or in two dimensions:

from pyProximation import Measure
from numpy import sqrt
# define the density function
w = lambda x, y:y**2/sqrt(1.-x**2)
# define the support
D = [(-1, 1), (-1, 1)]
# initiate the measure space
M = Measure(D, w)
# set f(x, y) = x^2 + y
f = lambda x, y: x**2 + y
# integrate f(x, y) w.r.t. w(x, y)
print M.integral(f)

p-norms

Given a measure space \((X, \mu)\) and a measurable function f, the p-norm of f, for a positive p is defined as:

\[\| f \|_p =\left(\int_{X} |f|^p d\mu\right)^{1/p}.\]

The method norm(p, f) calculates the above quantity.
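
For instance, continuing the one-dimensional example above, the 2-norm of \(f(x)=x^2\) with respect to the Chebyshev-type weight could be computed as follows (a sketch based on the norm(p, f) signature described above):

from pyProximation import Measure
from numpy import sqrt
# Chebyshev-type weight on [-1, 1]
w = lambda x: 1./sqrt(1. - x**2)
M = Measure([(-1, 1)], w)
# f(x) = x^2
f = lambda x: x**2
# compute || f ||_2 with respect to w
print M.norm(2, f)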

Drawing samples

Suppose that \((X, \mu)\) is a measure space and one wishes to draw a sample of size n from X according to the distribution \(\mu\). This can be done by the method sample(n) which returns a list of n random points from the support, according to \(\mu\).
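
For instance (a sketch; the domain and the sample size are arbitrary here):

from pyProximation import Measure
# Lebesgue measure on [0, 1] x [0, 1]
M = Measure([(0, 1), (0, 1)], 1)
# draw 10 random points from the support according to the measure
pnts = M.sample(10)
print pnts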

Hilbert Spaces

Orthonormal system of functions

Let X be a topological space and \(\mu\) be a finite Borel measure on X. The bilinear function \(\langle\cdot,\cdot\rangle\) defined on \(L^2(X, \mu)\) as \(\langle f, g\rangle = \int_X fg d\mu\) is an inner product which turns \(L^2(X, \mu)\) into a Hilbert space.

Let us denote the family of all continuous real valued functions on a non-empty compact space X by \(\textrm{C}(X)\). Suppose that among elements of \(\textrm{C}(X)\), a subfamily A of functions is of particular interest. Suppose that A is a subalgebra of \(\textrm{C}(X)\) containing constants. We say that an element \(f\in\textrm{C}(X)\) can be approximated by elements of A, if for every \(\epsilon>0\), there exists \(p\in A\) such that \(|f(x)-p(x)|<\epsilon\) for every \(x\in X\). The following classical result characterizes when every \(f\in\textrm{C}(X)\) can be approximated by elements of A.

Note

Stone-Weierstrass:

Every element of \(\textrm{C}(X)\) can be approximated by elements of A if and only if for every \(x\neq y\in X\), there exists \(p\in A\) such that \(p(x)\neq p(y)\).

Despite the strong and important implications of the Stone-Weierstrass theorem, it leaves the computational details out and does not give a specific algorithm to produce an estimator for f with elements of A, given an error tolerance \(\epsilon\); so the search for such an algorithm begins.

Define \(\|f\|_{\infty}\) (the \(\sup\) norm of f) of a given function \(f\in\textrm{C}(X)\) by

\[\|f\|_{\infty} = \sup_{x\in X}|f(x)|.\]

Then the above argument can be read as: For every \(f\in\textrm{C}(X)\) and every \(\epsilon>0\), there exists \(p\in A\) such that \(\|f-p\|_{\infty}<\epsilon\).

Let \((V, \langle\cdot,\cdot\rangle)\) be an inner product space with \(\|v\|_2=\langle v,v\rangle^{\frac{1}{2}}\). A basis \(\{v_{\alpha}\}_{\alpha\in I}\) is called an orthonormal basis for V if \(\langle v_{\alpha},v_{\beta}\rangle=\delta_{\alpha\beta}\), where \(\delta_{\alpha\beta}=1\) if and only if \(\alpha=\beta\) and is equal to 0 otherwise. Every given set of linearly independent vectors can be turned into a set of orthonormal vectors that spans the same subspace as the original. The following well-known result gives an algorithm for producing such an orthonormal set from a set of linearly independent vectors:

Note

Gram–Schmidt

Let \((V,\langle\cdot,\cdot\rangle)\) be an inner product space. Suppose \(\{v_{i}\}^{n}_{i=1}\) is a set of linearly independent vectors in V. Let

\[u_{1}:=\frac{v_{1}}{\|v_{1}\|_2}\]

and (inductively) let

\[w_{k}:=v_{k}-\sum_{i=1}^{k-1}\langle v_{k},u_{i}\rangle u_{i}\textrm{ and } u_{k}:=\frac{w_{k}}{\|w_{k}\|_2}.\]

Then \(\{u_{i}\}_{i=1}^{n}\) is an orthonormal collection, and for each k,

\[span\{u_{1},u_{2},\cdots,u_{k}\}=span\{v_{1},v_{2},\cdots,v_{k}\}.\]

Note that in the above, we may even take \(n=\infty\).

Let \(B=\{v_1, v_2, \dots\}\) be an ordered basis for \((V,\langle\cdot,\cdot\rangle)\). For any given vector \(w\in V\) and any initial segment of B, say \(B_n=\{v_1,\dots,v_n\}\), there exists a unique \(v\in\textrm{span}(B_n)\) such that \(\|w-v\|_2\) is the minimum:

Note

Let \(w\in V\) and B a finite orthonormal set of vectors (not necessarily a basis). Then for \(v=\sum_{u\in B}\langle u,w\rangle u\),

\[\|w-v\|_2 = \min_{z\in\textrm{span}(B)}\|w-z\|_2.\]
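
To make both notes concrete, the following minimal SymPy sketch (independent of pyProximation) orthonormalizes \(\{1, x, x^2\}\) on \([-1, 1]\) with the Lebesgue measure and then computes the best \(\|\cdot\|_2\)-approximation of \(e^x\) from their span:

from sympy import Symbol, integrate, sqrt, exp
x = Symbol('x')

def inner(f, g):
    # <f, g> = \int_{-1}^{1} f g dx
    return integrate(f*g, (x, -1, 1))

# Gram-Schmidt on the monomials 1, x, x^2
ortho = []
for v in [1, x, x**2]:
    w = v - sum(inner(v, u)*u for u in ortho)
    ortho.append(w / sqrt(inner(w, w)))

# best L^2-approximation of e^x: v = sum <u, f> u
f = exp(x)
approx = sum(inner(f, u)*u for u in ortho)
print approx.expand()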

Now, let \(\mu\) be a finite measure on X and for \(f,g\in\textrm{C}(X)\) define \(\langle f,g\rangle=\int_Xf g d\mu\). This defines an inner product on the space of functions. The norm induced by the inner product is denoted by \(\|\cdot\|_{2}\). It is evident that

\[\|f\|_{2}\leq\|f\|_{\infty}\mu(X)^{1/2},~\forall f\in\textrm{C}(X),\]

which implies that any good approximation in \(\|\cdot\|_{\infty}\) gives a good \(\|\cdot\|_{2}\)-approximation. But generally, our interest is the other way around. Employing the Gram–Schmidt procedure, we can find \(\|\cdot\|_{2}\)-approximations within any desired accuracy, but this does not guarantee a good \(\|\cdot\|_{\infty}\)-approximation. The situation is favorable in the finite dimensional case: take \(B=\{p_1,\dots,p_n\}\subset\textrm{C}(X)\) and \(f\in\textrm{C}(X)\); then there exists \(K_f>0\) such that for every \(g\in\textrm{span}(B\cup\{f\})\),

\[K_f\|g\|_{\infty}\leq\|g\|_{2}\leq\|g\|_{\infty}\mu(X)^{1/2}.\]

Since X is assumed to be compact, \(\textrm{C}(X)\) is separable, i.e., \(\textrm{C}(X)\) admits a countable dimensional dense subvector space (e.g. polynomials for when X is a closed, bounded interval). Thus for every \(f\in\textrm{C}(X)\) and every \(\epsilon>0\) one can find a big enough finite B, such that the above inequality holds. In other words, good enough \(\|\cdot\|_{2}\)-approximations of f give good \(\|\cdot\|_{\infty}\)-approximations, as desired.

OrthSystem

Given a measure space, the OrthSystem class implements the described procedure, symbolically. Therefore, it relies on a symbolic environment. Currently, three such environments are acceptable:

  1. sympy
  2. sage
  3. symengine

Legendre polynomials

Let \(d\mu(x) = dx\), the regular Lebesgue measure on \([-1, 1]\) and \(B=\{1, x, x^2, \dots, x^n\}\). The orthonormal polynomials constructed from B are called Legendre polynomials. The \(n^{th}\) Legendre polynomial is denoted by \(P_n(x)\).

The following code generates Legendre polynomials up to a given order:

# the symbolic package
from sympy import *
from pyProximation import Measure, OrthSystem
# the symbolic variable
x = Symbol('x')
# set a limit to the order
n = 6
# define the measure
D = [(-1, 1)]
M = Measure(D, 1)
S = OrthSystem([x], D, 'sympy')
# link the measure to S
S.SetMeasure(M)
# set B = {1, x, x^2, ..., x^n}
B = S.PolyBasis(n)
# link B to S
S.Basis(B)
# generate the orthonormal basis
S.FormBasis()
# print the result
print S.OrthBase

Chebyshev polynomials

Let \(d\mu(x)=\frac{dx}{\sqrt{1-x^2}}\) on \([-1, 1]\) and B as in Legendre polynomials. The orthonormal polynomials associated to this setting are called Chebyshev polynomials and the \(n^{th}\) one is denoted by \(T_n(x)\).

The following code generates Chebyshev polynomials up to a given order:

# the symbolic package
from sympy import *
from numpy import sqrt
from pyProximation import Measure, OrthSystem
# the symbolic variable
x = Symbol('x')
# set a limit to the order
n = 6
# define the measure
D = [(-1, 1)]
w = lambda x: 1./sqrt(1. - x**2)
M = Measure(D, w)
S = OrthSystem([x], D, 'sympy')
# link the measure to S
S.SetMeasure(M)
# set B = {1, x, x^2, ..., x^n}
B = S.PolyBasis(n)
# link B to S
S.Basis(B)
# generate the orthonormal basis
S.FormBasis()
# print the result
print S.OrthBase

Approximation

Let \((X, \mu)\) be a compact Borel measure space and \(\mathcal{O}=\{u_1, u_2,\dots\}\) an orthonormal basis of functions whose span is dense in \(L^2(X, \mu)\). Given a function \(f\in L^2(X, \mu)\), f can be approximated as

\[f = \lim\limits_{n\rightarrow\infty}\sum_{i=1}^n\langle f, u_i\rangle u_i\]

OrthSystem.Series calculates the coefficients \(\langle f, u_i\rangle\).

Truncated Fourier series

Let \(d\mu(x) = dx\), the regular Lebesgue measure on \([c, c + 2l]\) and \(B=\{1, \sin(\pi x), \cos(\pi x), \sin(2\pi x), \cos(2\pi x), \dots, \sin(n\pi x), \cos(n\pi x)\}\). The following code calculates the Fourier series approximation of \(f(x)=\sin(x)e^x\):

from sympy import *
from numpy import sqrt
from pyProximation import Measure, OrthSystem
# the symbolic variable
x = Symbol('x')
# set a limit to the order
n = 4
# define the measure
D = [(-1, 1)]
w = lambda x: 1./sqrt(1. - x**2)
M = Measure(D, w)
S = OrthSystem([x], D, 'sympy')
# link the measure to S
S.SetMeasure(M)
# set B to the truncated Fourier basis of order n
B = S.FourierBasis(n)
# link B to S
S.Basis(B)
# generate the orthonormal basis
S.FormBasis()
# number of elements in the basis
m = len(S.OrthBase)
# set f(x) = sin(x)e^x
f = sin(x)*exp(x)
# extract the coefficients
Coeffs = S.Series(f)
# form the approximation
f_app = sum([S.OrthBase[i]*Coeffs[i] for i in range(m)])
print f_app

Fitting Data

Suppose that a set of data points \((x_1,y_1),\dots,(x_m,y_m)\) is given, where \(x_i\in\mathbb{R}^n\). To fit a curve or surface to the data with minimum mean squared error, one can employ OrthSystem with an arbitrary system of functions. The following example generates random data and fits two surfaces using polynomials and trigonometric functions:

from random import uniform
from sympy import *
from sympy.utilities.lambdify import lambdify, implemented_function
# import the module
from pyProximation import Measure, OrthSystem, Graphics
x, y, z = symbols('x, y, z')
# define dirac function symbolically and numerically
dirac = implemented_function(
    Function('dirac'), lambda a, b: 1 * (abs(a - b) < .000001))
# list of random points
points = [(uniform(0, 1), uniform(0, 1)) for _ in range(50)]
# container for the support of discrete measure
D = {}
# a symbolic function to represent the random points
g = 0
# Random values of the function
vals = []
for p in points:
    t = uniform(-.8, 1)
    vals.append(t)
    g += t * dirac(x, p[0]) * dirac(y, p[1])
    D[p] = 1
# The discrete measure supported at random points
M = Measure(D)
# Orthonormal systems of functions
S1 = OrthSystem([x, y], [(0, 1), (0, 1)])
S1.SetMeasure(M)

S2 = OrthSystem([x, y], [(0, 1), (0, 1)])
S2.SetMeasure(M)
# polynomial basis
B1 = S1.PolyBasis(4)
# trigonometric basis
B2 = S2.FourierBasis(3)
# link the basis to the orthogonal systems
S1.Basis(B1)
S2.Basis(B2)
# form the orthonormal basis
S1.FormBasis()
S2.FormBasis()
# calculate coefficients
cfs1 = S1.Series(g)
cfs2 = S2.Series(g)
# orthogonal approximation
aprx1 = sum([S1.OrthBase[i] * cfs1[i] for i in range(len(S1.OrthBase))])
aprx2 = sum([S2.OrthBase[i] * cfs2[i] for i in range(len(S2.OrthBase))])
# the graphic object
G1 = Graphics('sympy')
G2 = Graphics('sympy')
G1.Plot3D(aprx1, (x, 0, 1), (y, 0, 1))
G2.Plot3D(aprx2, (x, 0, 1), (y, 0, 1))
P = []
idx = 0
for p in points:
    q = (p[0], p[1], vals[idx])
    P.append(q)
    idx += 1
# link the points to the object
G1.Point(P, color='red')
G2.Point(P, color='red')
# save to file
G1.save('poly.png')
G2.save('trig.png')
_images/poly.png _images/trig.png

Rational Approximation

Let \(f:X\rightarrow\mathbb{R}\) be a continuous function and \(\{b_0, b_1,\dots\}\) an orthonormal system of functions over \(X\) with respect to a measure \(\mu\). Suppose that we want to approximate \(f\) with a function of the form \(\frac{p(x)}{q(x)}\) where \(p=\sum_{j=0}^m\alpha_jb_j\) and \(q=1+\sum_{j=1}^n\beta_jb_j\). Let \(r(x)=p(x)-q(x)f(x)\) be the residual. We can find the coefficients \(\alpha_i, \beta_j\) such that \(\|r\|_2\) be minimum.

Let \(L(\alpha, \beta)=\langle r, r\rangle=\|r\|_2^2\). Then the solution satisfies the following equations:

\[\begin{split}\frac{\partial}{\partial \alpha_i}L = 0,\\ \frac{\partial}{\partial \beta_i}L = 0,\end{split}\]

which is a system of linear equations. This is implemented in rational.RationalAprox.RatLSQ:

from sympy import *
from pyProximation import *

x = Symbol('x')
f = sinh(x) * cos(3 * x) + sin(3 * x) * exp(x)
D = [(-pi, pi)]
S = OrthSystem([x], D)
B = S.PolyBasis(5)

# link B to S
S.Basis(B)

# generate the orthonormal basis
S.FormBasis()

# extract the coefficients of approximations
Coeffs = S.Series(f)

# form the approximation
h = sum([S.OrthBase[i] * Coeffs[i] for i in range(len(S.OrthBase))])
T = RationalAprox(S)
g = T.RatLSQ(5, 5, f)

G = Graphics('sympy', 100)
G.Plot2D(f, (x, -3.2, 3.2), legend='exact')
G.Plot2D(g, (x, -3.2, 3.2), color='green', legend='rational')
G.Plot2D(h, (x, -3.2, 3.2), color='red', legend='Legendre')
G.save('Exm05.png')
_images/rat.png

Interpolation

Lagrange interpolation

Suppose that a list of \(m+1\) points \(\{(x_0, y_0),\dots,(x_{m}, y_{m})\}\) in \(\mathbb{R}^2\) is given, such that \(x_i\neq x_j\), if \(i\neq j\). Then

\[p({\bf x}) = \sum_{i=0}^m y_i\ell_i({\bf x}),\]

is a polynomial of degree at most m which passes through all the given points. Here

\[\ell_i({\bf x}) = \prod_{j\neq i}\frac{{\bf x}-x_j}{x_i-x_j},\]

are Lagrange Basis Polynomials.
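
For illustration, the following minimal SymPy sketch (independent of pyProximation; the sample points and values are made up) builds the one-variable Lagrange interpolant directly from the formula above:

from sympy import Symbol, expand
x = Symbol('x')
# sample points and corresponding values
xs = [0, 1, 2]
ys = [1, 3, 2]
# ell_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)
def ell(i):
    L = 1
    for j in range(len(xs)):
        if j != i:
            L *= (x - xs[j]) / (xs[i] - xs[j])
    return L
# p(x) = sum_i y_i ell_i(x)
p = expand(sum(ys[i]*ell(i) for i in range(len(xs))))
print p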

This procedure can be extended to the multivariate case as well.

Suppose that a list of points \(\{{x}_1,\dots,{x}_{\rho}\}\) in \(\mathbb{R}^n\) and a list of corresponding values \(\{y_1,\dots,y_\rho\}\) are given. Let’s denote by \({\bf X}\) the tuple \(({\bf X}_1,\dots,{\bf X}_n)\) of variables and \({\bf X}_1^{e_1}\cdots{\bf X}_n^{e_n}\) by \({\bf X}^{\bf e}\), where \({\bf e}=(e_1,\dots,e_n)\).

If for some \(m>0\), we have \(\rho={{m+n}\choose{m}}\), then the number of given points matches the number of monomials of degree at most m in the polynomial basis. Denote the exponents of these monomials by \({\bf e}_i\), \(i=1,\dots,\rho\) and let

\[\begin{split}D=\left(\begin{array}{ccc} x_1^{{\bf e}_1} & \dots & x_1^{{\bf e}_{\rho}}\\ \vdots & & \vdots \\ x_{\rho}^{{\bf e}_1} & \dots & x_{\rho}^{{\bf e}_{\rho}}\\ \end{array}\right)\end{split}\]

and for \(1\leq j\leq\rho\):

\[\begin{split}D_j=\left(\begin{array}{ccc} x_1^{{\bf e}_1} & \dots & x_1^{{\bf e}_{\rho}}\\ \vdots & \vdots & \vdots \\ x_{j-1}^{{\bf e}_1} & \dots & x_{j-1}^{{\bf e}_{\rho}}\\ {\bf X}^{{\bf e}_1} & \dots & {\bf X}^{{\bf e}_{\rho}}\\ x_{j+1}^{{\bf e}_1} & \dots & x_{j+1}^{{\bf e}_{\rho}}\\ \vdots & \vdots & \vdots \\ x_{\rho}^{{\bf e}_1} & \dots & x_{\rho}^{{\bf e}_{\rho}}\\ \end{array}\right).\end{split}\]

Then the polynomial

\[p({\bf X}) = \sum_{i=1}^{\rho} y_i\frac{|D_i|}{|D|},\]

interpolates the given list of points and their corresponding values. Here \(|M|\) denotes the determinant of M.

The above procedure is implemented in the Interpolation module. The following code provides an example in the 2-dimensional case:

from sympy import *
from pyProximation import Interpolation, Graphics
# define symbolic variables
x = Symbol('x')
y = Symbol('y')
# initiate the interpolation instance
Inter = Interpolation([x, y], 'sympy')
# list of points
points = [(-1, 1), (-1, 0), (0, 0), (0, 1), (1, 0), (1, -1)]
# corresponding values
values = [-1, 2, 0, 2, 1, 0]
# interpolate
p = Inter.Interpolate(points, values)
# print the result
print p
G = Graphics('sympy')
points3d = [(-1, 1, -1), (-1, 0, 2), (0, 0, 0), (0, 1, 2), (1, 0, 1), (1, -1, 0)]
G.Point(points3d, color='red')
G.Plot3D(p, (x, -1.1, 1.1), (y, -1.1, 1.1))
# save the image
G.save('InterpolationExm.png')

This will be the result:

_images/InterpolationExm.png

\(L^2\)-approximation with discrete measures

Suppose that \(\mu=\sum_{i=1}^n\delta_{x_i}\) is a measure with n points in its support. Then the orthogonal system of polynomials consists of at most n polynomials. Approximation with these n polynomials is essentially the same as interpolation:

# the symbolic package
from sympy import *
from pyProximation import Measure, OrthSystem, Interpolation, Graphics
# symbolic variable
x = Symbol('x')
# function to be approximated
g = sin(x)*exp(sin(x)*x)
# its numerical equivalent
g_ = lambdify(x, g, 'numpy')
# number of approximation terms
n = 6
# half interval length
l = 3.1
# interpolation points and values
Xs = [[-3], [-2], [-1], [0], [1], [2], [3]]
Ys = [g_(Xs[i][0]) for i in range(7)]
# a discrete measure
supp = {-3:1, -2:1, -1:1, 0:1, 1:1, 2:1, 3:1}
M = Measure(supp)
# orthogonal system
S = OrthSystem([x], [(-l, l)])
# link the measure
S.SetMeasure(M)
# polynomial basis
B = S.PolyBasis(n)
# link the basis to the orthogonal system
S.Basis(B)
# form the orthonormal basis
S.FormBasis()
# calculate coefficients
cfs = S.Series(g)
# orthogonal approximation
aprx = sum([S.OrthBase[i]*cfs[i] for i in range(len(B))])
# interpolate
Intrp = Interpolation([x])
intr = Intrp.Interpolate(Xs, Ys)
# plot the results
G = Graphics('sympy', numpoints=100)
G.SetTitle("$n = %d$"%(n))
G.Plot2D(g, (x, -l, l), color='blue', legend='Original')
G.Plot2D(aprx, (x, -l, l), color='red', legend='Orthogonal', thickness=2)
G.Plot2D(intr, (x, -l, l), color='green', legend='Interpolant')
G.save('OrthIntrp.png')
_images/OrthIntrp.png

Collocation

Integro-differential equations

A system of integro-differential equations is a system of equations that involve both integrals and derivatives of a function. The general first-order integro-differential equation, linear only with respect to the term involving the derivative, is of the form

\[\frac{d}{dx}u(x) + \int_{x_0}^{x} f(t, u(t)) dt = g(x, u(x)),\quad u(x_0)=u_0.\]

Collocation method

In the collocation method, one assumes that the solution of the equation is of a certain form and tries to exploit information about the assumed solution. More specifically, assume that \(u_1, u_2, \dots, u_n\) are orthonormal functions and that the solution is of the form \(f=\sum_i a_i u_i\), where the \(a_i\) are unknowns. Plug f into the equation and choose n different admissible points; plugging those points into the resulting equation eliminates x and yields n equations in terms of \(a_1,\dots, a_n\). Solving the resulting system of algebraic equations gives an approximation for f; a toy illustration is sketched below.
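
To make the mechanics concrete, here is a toy SymPy sketch (independent of pyProximation; the equation, basis and collocation points are made up) that solves \(y'=y\), \(y(0)=1\) on \([0, 1]\) with a cubic polynomial ansatz:

from sympy import Symbol, symbols, diff, solve
x = Symbol('x')
# unknown coefficients a_0, ..., a_3 of the ansatz f = sum a_i x^i
a = symbols('a0:4')
f = sum(a[i]*x**i for i in range(4))
# residual of the toy equation y' - y = 0
r = diff(f, x) - f
# one initial condition plus three collocation points give four equations
eqs = [f.subs(x, 0) - 1] + [r.subs(x, p) for p in (0.25, 0.5, 0.75)]
sol = solve(eqs, a)
# the collocation approximation of e^x
print f.subs(sol)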

Collocation class

Single equation

The Collocation class implements the above described method. The following example solves the equation \(\frac{dy}{dx} + 2y + 5 \int y dx = 1\), \(y(0)=y(2\pi)=0\) where the exact solution is \(\frac{1}{2}\sin(2x)e^{-x}\):

from sympy import *
from pyProximation import *
# symbolic variable
x = Symbol('x')
# symbolic function
y = Function('y')(x)
# number of approximation terms
n = 10
# orthogonal system
S = OrthSystem([x], [(0, 2*pi)])
# polynomial basis
B = S.PolyBasis(n)
# link the basis to the orthogonal system
S.Basis(B)
# form the orthonormal basis
S.FormBasis()
# form the equation
EQ = Eq(diff(y, x) + 2*y +5*integrate(y, x), 1)
# initiate collocation object with x as variable and y as function
C = Collocation([x], [y])
# link the orthogonal system to the collocation object and the unknown function `y`
C.SetOrthSys(S, y)
# link the equation to the collocation object
C.Equation([EQ])
# initial and boundary conditions
C.Condition(Eq(y, 0), [0])
C.Condition(Eq(y, 0), [2*pi])
# set the solver to 'scipy'
C.setSolver('scipy')
# solve the collocation system and print the solution
Apprx = C.Solve()
print Apprx[0]

In the above example the Collocation class selects collocation points itself according to the measure chosen for the orthogonal system, in this case the usual Lebesgue measure. Thus, the class samples a sufficient number of points uniformly from the domain. The solution and the exact answer are depicted below:

_images/CollExm01Smpl.png

One can provide preferred collocation points to the solver. The following repeats the previous example, where the collocation points are now selected from the domain at a fixed distance from each other:

from sympy import *
from pyProximation import *
# symbolic variable
x = Symbol('x')
# symbolic function
y = Function('y')(x)
# number of approximation terms
n = 10
# orthogonal system
S = OrthSystem([x], [(0, 2*pi)])
# polynomial basis
B = S.PolyBasis(n)
# link the basis to the orthogonal system
S.Basis(B)
# form the orthonormal basis
S.FormBasis()
# form the equation
EQ = Eq(diff(y, x) + 2*y +5*integrate(y, x), 1)
# initiate collocation object with x as variable and y as function
C = Collocation([x], [y])
# link the orthogonal system to the collocation object and the unknown function `y`
C.SetOrthSys(S, y)
# link the equation to the collocation object
C.Equation([EQ])
# initial and boundary conditions
C.Condition(Eq(y, 0), [0])
C.Condition(Eq(y, 0), [2*pi])
# a list of collocation points
pnts = [[i*2*pi/n] for i in range(1,n)]
# link the collocation point to the object
C.CollPoints(pnts)
# set the solver to 'scipy'
C.setSolver('scipy')
# solve the collocation system and print the solution
Apprx = C.Solve()
print Apprx[0]

The result shows a slight improvement in the solution:

_images/CollExm01NoSmpl.png

Note

Each point of collocation must be given as a list or tuple.

System of equations

The Collocation class is also able to handle systems of equations. Consider the following system of partial differential equations:

\[\begin{split}\left\lbrace \begin{array}{lcl} \frac{\partial x}{\partial t} + x + 4y & = & 10\\ x - \frac{\partial y}{\partial t} - y & = & 0, \end{array}\right.\end{split}\]

with initial conditions \(x(0) = 4\) and \(y(0) = 3\). The exact solution to the above system is:

\[\begin{split}\begin{array}{lcl} x(t) & = & 2(1 + e^{-t}\cos(2t) - e^{-t}\sin(2t))\\ y(t) & = & 2 + e^{-t}\cos(2t) + e^{-t}\sin(2t) \end{array}\end{split}\]

The following code solves the system and plots the exact and approximate solutions:

from sympy import *
from pyProximation import *
# symbolic variable
t = Symbol('t')
# symbolic function
x = Function('x')(t)
y = Function('y')(t)
# number of approximation terms
n = 8
# orthogonal system
S = OrthSystem([t], [(0, pi)])
# polynomial basis
B = S.PolyBasis(n)
# link the basis to the orthogonal system
S.Basis(B)
# form the orthonormal basis
S.FormBasis()
# form the equation
EQ1 = Eq(diff(x, t) + x + 4*y, 10)
EQ2 = Eq(x - diff(y, t) - y, 0)
# initiate collocation object with x as variable and y as function
C = Collocation([t], [x, y])
# link the orthogonal system to the collocation object and unknown functions
C.SetOrthSys(S, x)
C.SetOrthSys(S, y)
# link the equation to the collocation object
C.Equation([EQ1, EQ2])
# initial and boundary conditions
C.Condition(Eq(x, 4), [0])
C.Condition(Eq(y, 3), [0])
# set the solver to 'scipy'
C.setSolver('scipy')
# solve the collocation system and print the solution
Apprx = C.Solve()
# print the answers
print Apprx[0]
print Apprx[1]
# the exact solution
f = [2*(1 + exp(-t)*cos(2*t) - exp(-t)*sin(2*t)), 2 + exp(-t)*cos(2*t) + exp(-t)*sin(2*t)]
# plot the exact and approximate solutions
G = Graphics('sympy', numpoints=200)
G.SetTitle("$n = %d$"%(n))
G.ParamPlot2D(f, (t, 0, pi), color='blue', legend='Exact')
G.ParamPlot2D(Apprx, (t, 0, pi), color='red', legend='Approximation')
G.save('Plot-%d.png'%(n))

The following are the results for \(n=7\) and \(n=8\):

_images/CollExmSys7.png _images/CollExmSys8.png

Distributing over subregions

Sometimes the system of equations is too complicated and it may be very costly to produce a reasonably good estimate by raising the size of the function basis. A possible solution is to divide the domain into smaller subregions and solve the system on each of them, using the partial solutions to enforce appropriate boundary conditions on the remaining subregions so that the solutions are consistent across all subregions. This task is done by means of the SubRegion class.

Note

Currently, this only works with symbolic package sympy.

Graphics

This small module provides simple functionality to draw 2D and 3D plots. Since the functions that appear in this package are of three different types, one must declare to the Graphics class what type of function it is dealing with. The three types are:

  1. numeric, the default type,
  2. sympy, the symbolic sympy functions,
  3. sage, the symbolic sage functions

The following plots a list of random points in 3D:

from random import uniform
# import the module
from pyProximation import Graphics
# list of random points
points = [(uniform(-10, 10), uniform(-10, 10), uniform(-10, 10)) for _ in range(100)]
# the graphic object
G = Graphics()
# link the points to the object
G.Point(points, color='red')
# save to file
G.save('points.png')
_images/points.png

Two Dimensional Plots

The method Plot2D is to plot curves in 2 dimensions. The following code shows an example:

# choose the symbolic environment
Symbolic = 'sympy'
# define the symbolic variables accordingly
if Symbolic == 'sympy':
        from sympy import *
        x = Symbol('x')
        y = Function('y')(x)
elif Symbolic == 'sage':
        from sage.all import *
        x = var('x')
        y = function('y')(x)
# import the modules
from pyProximation import *
# size of basis
n = 6
# set the measure
M = Measure([(-1, 1)], lambda x:1./sqrt(1.-x**2))
# Orthogonal system of functions
S = OrthSystem([x], [(-1, 1)], Symbolic)
# link the measure to the orthogonal system
S.SetMeasure(M)
# monomial basis
B = S.PolyBasis(n)
# Fourier basis
#B = S.FourierBasis(n)
# link the basis
S.Basis(B)
# form the orthogonal system
S.FormBasis()
# the exact solution
Z = sin(pi*x)*exp(x)
R = diff(Z, x, x) - diff(Z, x)
# the pde
if Symbolic == 'sympy':
        EQ1 = (Eq(diff(y, x, x) - diff(y, x), R))
elif Symbolic == 'sage':
        EQ1 = (diff(y, x, x) - diff(y, x) == R)
# corresponding coefficients
series = S.Series(Z)
# orthogonal approximation
ChAprx = sum([S.OrthBase[i]*series[i] for i in range(len(S.OrthBase))])
# set up the collocation class
C = Collocation([x], [y], Symbolic)
# link to the orthogonal system
C.SetOrthSys(S)
# link the equation
C.Equation([EQ1])
# initial conditions
if Symbolic == 'sympy':
        C.Condition(Eq(y, 0), [0])
        C.Condition(Eq(y, sin(-pi)*exp(-1)), [-1])
        C.Condition(Eq(y, sin(pi)*exp(1)), [1])
elif Symbolic == 'sage':
        C.Condition(y == 0, [0])
        C.Condition(y == sin(-pi)*exp(-1), [-1])
        C.Condition(y == sin(pi)*exp(1), [1])
# collocation points
m = len(S.OrthBase)
pnts = [[-1 + i*2./m] for i in range(m)]
# link the collocation points
C.CollPoints(pnts)
# set solver
C.setSolver('scipy')
# solve
Apprx = C.Solve()
print Apprx[0]
# plot the results
G = Graphics(Symbolic)
G.Plot2D(Z, (x, -1, 1), color='blue', legend='Exact')
G.Plot2D(ChAprx, (x, -1, 1), color='green', legend='Orth Apprx')
G.Plot2D(Apprx[0], (x, -1, 1), color='red', legend='Colloc Apprx')
G.save('PlotsPoly.png')
#G.save('PlotsFourier.png')
_images/PlotsPoly.png _images/PlotsFourier.png

For an example of parametric plots, see the ParamPlot2D usage in the system-of-equations example of the Collocation section.

Three Dimensional Plots

To generate static 3-dimensional plots, Graphics implements Plot3D. The following example illustrates the usage of this method:

# select the symbolic tool
Symbolic = 'sympy'
if Symbolic == 'sympy':
        from sympy import *
        x = Symbol('x')
        t = Symbol('t')
        y = Function('y')(t, x)
elif Symbolic == 'sage':
        from sage.all import *
        t = var('t')
        x = var('x')
        y = function('y')(t, x)
# import the modules
from pyProximation import *
# degree of basis
n = 4
# orthogonal system
S = OrthSystem([t, x], [(0, 1), (0, 1)])
# monomial basis
B = S.PolyBasis(n)
# link the basis
S.Basis(B)
# form the orthonormal basis
S.FormBasis()
# construct a pde
Z = t*sin(pi*x)
R = diff(Z, t) - diff(Z, x)
if Symbolic == 'sympy':
        EQ1 = Eq(diff(y, t) - diff(y, x), R)
elif Symbolic == 'sage':
        EQ1 = diff(y, t) - diff(y, x) == R
# collocation object
C = Collocation([t, x], [y])
# link the orthogonal system
C.SetOrthSys(S)
# link the equation
C.Equation([EQ1])
# some initial & boundary conditions
if Symbolic == 'sympy':
        C.Condition(Eq(y, 0), [0, 0])
        C.Condition(Eq(y, 0), [0, .3])
        C.Condition(Eq(y, 0), [1, 1])
        C.Condition(Eq(y, 1), [1, .5])
        C.Condition(Eq(y, 0), [0, .7])
elif Symbolic == 'sage':
        C.Condition(y == 0, [0, 0])
        C.Condition(y == 0, [0, .3])
        C.Condition(y == 0, [1, 1])
        C.Condition(y == 1, [1, .5])
        C.Condition(y == 0, [0, .7])
# set the solver
C.setSolver('scipy')
# solve the collocation system
Apprx = C.Solve()
print Apprx[0]
# plot the result
G = Graphics(Symbolic)
G.SetLabelX("$t$")
G.SetLabelY("$x$")
G.SetLabelZ("$y(t, x)$")
G.Plot3D(Apprx[0], (t, 0, 1), (x, 0, 1))
# save the image
G.save('PDEplot.png')
# open the interactive window
G.interact()
_images/PDEplot.png

A second method called interact is invoked at the end, which calls the mayavi library to show an interactive view of the surface.

_images/snapshot.png

Code Documentation

class base.Foundation[source]

This class contains common features of all modules.

DetSymEnv()[source]

Returns a list of all symbolic tools present among ‘sympy’, ‘sage’ and ‘symengine’.
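
For instance, a sketch assuming the method is inherited by the concrete classes such as OrthSystem:

from sympy import Symbol
from pyProximation import OrthSystem
x = Symbol('x')
S = OrthSystem([x], [(-1, 1)])
# list the symbolic toolkits detected in the current environment
print S.DetSymEnv()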

class interpolation.Interpolation(var, env='sympy')[source]

The Interpolation class provides polynomial interpolation routines in the multivariate case.

var is the list of symbolic variables and env is the symbolic tool.

Delta(idx=-1)[source]

Constructs the matrix corresponding to the idx’th point if idx>0; otherwise returns the discriminant.

Interpolate(points, vals)[source]

Takes a list of points points and a corresponding list of values vals and returns the interpolant.

Since in the multivariate case there is a constraint on the number of points, it checks the validity of the input. In case of failure, it describes the type of error that occurred, according to the inputs.

MinNumPoints()[source]

Returns the minimum number of points still required.

Monomials()[source]

Generates the minimal set of monomials for interpolation.

class graphics.Graphics(env='numeric', numpoints=50)[source]

This class provides basic graphic tools based on matplotlib and mayavi.

Accepts one optional argument env which determines the type of the function to be visualized:

  • numeric: is a numerical function (regular python functions)
  • sympy: Sympy symbolic function
  • sage: Sage symbolic function
ParamPlot2D(funcs, rng, color='blue', legend='', thickness=1)[source]

Appends a parametric curve to the Graphics object. The parameters are as follows:

  • funcs: the tuple of functions to be plotted,
  • rng: a triple of the form (t, a, b), where t is the independent variable of funcs, over the range [a, b],
  • color: the color of the current curve,
  • legend: the text for the legend of the current curve.
Plot2D(func, xrng, color='blue', legend='', thickness=1)[source]

Appends a curve to the Graphics object. The parameters are as follows:

  • func: the function to be plotted,
  • xrng: a triple of the form (x, a, b), where x is the independent variable of func, over the range [a, b],
  • color: the color of the current curve,
  • legend: the text for the legend of the current curve.
Plot3D(func, xrng, yrng)[source]

Sets a surface to the Graphics object. The parameters are as follows:

  • func: the function to be plotted,
  • xrng: a triple of the form (x, a, b), where x is the first independent variable of func, over the range [a, b],
  • yrng: a triple of the form (y, c, d), where y is the second independent variable of func, over the range [c, d].
Point(pnts, color='blue', marker='o', legend='')[source]

Adds a list of points to the plot.

SetLabelX(lbl)[source]

Sets the label for X axis

SetLabelY(lbl)[source]

Sets the label for Y axis

SetLabelZ(lbl)[source]

Sets the label for Z axis

SetTitle(ttl)[source]

Sets the title of the graph.

interact()[source]

Shows an interactive demo of the 3D surface using mayavi, so it requires mayavi for Python to be installed.

save(fname='fig.png')[source]

Saves the output of the Graphics object to the file fname.
