
DeepXDE Documentation
Release 0.13.6

Lu Lu

Oct 13, 2021


Contents

1 Features

2 User guide
2.1 Install and Setup
2.2 Demos of Forward Problems
2.3 Demos of Inverse Problems
2.4 Demos of Function Approximation
2.5 FAQ
2.6 Research
2.7 Cite DeepXDE
2.8 The Team

3 API reference
3.1 deepxde
3.2 deepxde.data
3.3 deepxde.geometry
3.4 deepxde.icbcs
3.5 deepxde.nn
3.6 deepxde.nn.tensorflow_compat_v1
3.7 deepxde.nn.tensorflow
3.8 deepxde.nn.pytorch
3.9 deepxde.optimizers
3.10 deepxde.utils

4 Indices and tables

Python Module Index

Index


DeepXDE is a library for scientific machine learning. Use DeepXDE if you need a deep learning library that

• solves forward and inverse partial differential equations (PDEs) via physics-informed neural network (PINN),

• solves forward and inverse integro-differential equations (IDEs) via PINN,

• solves forward and inverse fractional partial differential equations (fPDEs) via fractional PINN (fPINN),

• approximates nonlinear operators via deep operator network (DeepONet),

• approximates functions from multi-fidelity data via multi-fidelity NN (MFNN),

• approximates functions from a dataset with/without constraints.

DeepXDE supports three tensor libraries as backends: TensorFlow 1.x (tensorflow.compat.v1 in TensorFlow 2.x), TensorFlow 2.x, and PyTorch.

Documentation: ReadTheDocs, SIAM Rev., Slides, Video

Papers on algorithms

• Solving PDEs and IDEs via PINN: SIAM Rev.

• Solving fPDEs via fPINN: SIAM J. Sci. Comput.

• Solving stochastic PDEs via NN-arbitrary polynomial chaos (NN-aPC): J. Comput. Phys.

• Solving inverse design/topology optimization via hPINN: arXiv

• Learning nonlinear operators via DeepONet: Nat. Mach. Intell., J. Comput. Phys., J. Comput. Phys.

• Learning from multi-fidelity data via MFNN: J. Comput. Phys., PNAS


CHAPTER 1

Features

DeepXDE has implemented many algorithms as shown above and supports many features:

• complex domain geometries without the tyranny of mesh generation. The primitive geometries are interval, triangle, rectangle, polygon, disk, cuboid, and sphere. Other geometries can be constructed as constructive solid geometry (CSG) using three boolean operations: union, difference, and intersection (see the example after this list).

• multi-physics, i.e., (time-dependent) coupled PDEs.

• 5 types of boundary conditions (BCs): Dirichlet, Neumann, Robin, periodic, and a general BC, which can be defined on an arbitrary domain or on a point set.

• different neural networks, such as (stacked/unstacked) fully connected neural networks, residual neural networks, and (spatio-temporal) multi-scale Fourier feature networks.

• 6 sampling methods: uniform, pseudorandom, Latin hypercube sampling, Halton sequence, Hammersley sequence, and Sobol sequence. The training points can be kept the same during training or be resampled every certain number of iterations.

• conveniently save the model during training, and load a trained model.

• uncertainty quantification using dropout.

• many different (weighted) losses, optimizers, learning rate schedules, metrics, etc.

• callbacks to monitor the internal states and statistics of the model during training, such as early stopping.

• enables the user code to be compact, resembling closely the mathematical formulation.

All the components of DeepXDE are loosely coupled, and thus DeepXDE is well-structured and highly configurable. It is easy to customize DeepXDE to meet new demands.
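For example, a rectangle with a circular hole can be built from the primitives via CSG. The snippet below is a minimal sketch; it assumes the CSGDifference class (CSGUnion and CSGIntersection are used the same way) is exposed under dde.geometry, so check the geometry API if the name differs in your version.

import deepxde as dde

# Rectangle minus a disk: a 2D domain with a circular hole
rect = dde.geometry.Rectangle(xmin=[0, 0], xmax=[2, 1])
disk = dde.geometry.Disk([1, 0.5], 0.25)
geom = dde.geometry.CSGDifference(rect, disk)

# The composite geometry is used like any primitive, e.g., to sample points
points = geom.random_points(100)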


CHAPTER 2

User guide

2.1 Install and Setup

2.1.1 Installation

DeepXDE requires one of the following backend-specific dependencies to be installed:

• TensorFlow 1.x: TensorFlow>=2.2.0

• TensorFlow 2.x: TensorFlow>=2.2.0 and TensorFlow Probability

• PyTorch: PyTorch

Then, you can install DeepXDE itself.

• Install the stable version with pip:

$ pip install deepxde

• Install the stable version with conda:

$ conda install -c conda-forge deepxde

• For developers, you should clone the repository to your local machine and put it along with your project scripts:

$ git clone https://github.com/lululxvi/deepxde.git

• Other dependencies

– Matplotlib

– NumPy

– scikit-learn

– scikit-optimize

– SciPy


2.1.2 Working with different backends

DeepXDE supports TensorFlow 1.x (tensorflow.compat.v1 in TensorFlow 2.x), TensorFlow 2.x, and PyTorch backends. DeepXDE will choose the backend according to the following options (from high priority to low priority):

• Use the DDEBACKEND environment variable:

– You can use DDEBACKEND=BACKEND python pde.py ... to specify the backend

– Or export DDEBACKEND=BACKEND to set the global environment variable

• Modify the config.json file under “~/.deepxde”:

– You can use python -m deepxde.backend.set_default_backend BACKEND to set the default backend

Currently BACKEND can be chosen from “tensorflow.compat.v1” (TensorFlow 1.x backend), “tensorflow” (TensorFlow 2.x backend), and “pytorch” (PyTorch). The default backend is TensorFlow 1.x.
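As a quick check of which backend was picked up, you can print it from Python. This is a minimal sketch; it assumes the backend_name attribute of deepxde.backend, so verify the attribute name in your version.

import deepxde as dde
from deepxde.backend import backend_name

# Prints "tensorflow.compat.v1", "tensorflow", or "pytorch"
print(backend_name)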

We note that

• Different backends support slightly different features; switch to another backend if DeepXDE raises a backend-related error. Currently, the number of features supported is: TensorFlow 1.x > TensorFlow 2.x > PyTorch. Some features can be implemented easily (basically translating from one framework to another), and we welcome your contributions.

• Different backends also have different computational speeds; switch to another backend if the speed is an issue in your case.

TensorFlow 1.x backend

Export DDEBACKEND as tensorflow.compat.v1 to specify the TensorFlow 1.x backend. The required TensorFlow version is 2.2.0 or later. Essentially, the TensorFlow 1.x backend uses the API tensorflow.compat.v1 in TensorFlow 2.x and disables eager execution:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

In addition, DeepXDE will set TF_FORCE_GPU_ALLOW_GROWTH to true to prevent TensorFlow from taking over the whole GPU memory.

TensorFlow 2.x backend

Export DDEBACKEND as tensorflow to specify the TensorFlow 2.x backend. The required TensorFlow version is 2.2.0 or later. In addition, DeepXDE will set TF_FORCE_GPU_ALLOW_GROWTH to true to prevent TensorFlow from taking over the whole GPU memory.

PyTorch backend

Export DDEBACKEND as pytorch to specify the PyTorch backend. In addition, if a GPU is available, DeepXDE will set the default tensor type to cuda, so that all tensors will be created on the GPU by default:

if torch.cuda.is_available():
    torch.set_default_tensor_type(torch.cuda.FloatTensor)


2.2 Demos of Forward Problems

Here are some demos of solving forward problems of PDEs.

2.2.1 ODEs

• ODE system

• Lotka-Volterra equation

2.2.2 Time-independent PDEs

Poisson equation in 1D with Dirichlet boundary conditions

Problem setup

We will solve a Poisson equation:

−∆𝑢 = 𝜋² sin(𝜋𝑥), 𝑥 ∈ [−1, 1],

with the Dirichlet boundary conditions

𝑢(−1) = 0, 𝑢(1) = 0.

The exact solution is 𝑢(𝑥) = sin(𝜋𝑥).

Implementation

This description goes through the implementation of a solver for the above described Poisson equation step-by-step.

First, the DeepXDE, NumPy (np), and TensorFlow (tf) modules are imported:

import deepxde as dde
import numpy as np
from deepxde.backend import tf

We begin by defining a computational geometry. We can use a built-in class Interval as follows

geom = dde.geometry.Interval(-1, 1)

Next, we express the PDE residual of the Poisson equation:

def pde(x, y):
    dy_xx = dde.grad.hessian(y, x)
    return -dy_xx - np.pi ** 2 * tf.sin(np.pi * x)

The first argument to pde is the network input, i.e., the 𝑥-coordinate. The second argument is the network output, i.e., the solution 𝑢(𝑥), but here we use y as the name of the variable.

Next, we consider the Dirichlet boundary condition. A simple Python function, returning a boolean, is used to define the subdomain for the Dirichlet boundary condition ({−1, 1}). The function should return True for those points inside the subdomain and False for the points outside. In our case, the points 𝑥 of the Dirichlet boundary condition are 𝑥 = −1 and 𝑥 = 1. (Note that because of rounding-off errors, it is often wise to use np.isclose to test whether two floating point values are equivalent.)


def boundary(x, _):
    return np.isclose(x[0], -1) or np.isclose(x[0], 1)

The argument x to boundary is the network input and is a 𝑑-dim vector, where 𝑑 is the dimension and 𝑑 = 1 in this case. To facilitate the implementation of boundary, a boolean on_boundary is used as the second argument. If the point x (the first argument) is on the entire boundary of the geometry (the left and right endpoints of the interval in this case), then on_boundary is True, otherwise, on_boundary is False. Thus, we can also define boundary in a simpler way:

def boundary(x, on_boundary):
    return on_boundary

Next, we define a function to return the value of 𝑢(𝑥) for the points 𝑥 on the boundary. In this case, it is 𝑢(𝑥) = 0.

def func(x):
    return 0

If the function value is not a constant, we can also use NumPy to compute it. For example, sin(𝜋𝑥) is 0 on the boundary, and thus we can also use

def func(x):
    return np.sin(np.pi * x)

Then, the Dirichlet boundary condition is

bc = dde.DirichletBC(geom, func, boundary)

Now, we have specified the geometry, PDE residual, and Dirichlet boundary condition. We then define the PDE problem as

data = dde.data.PDE(geom, pde, bc, 16, 2, solution=func, num_test=100)

The number 16 is the number of training residual points sampled inside the domain, and the number 2 is the number of training points sampled on the boundary. The argument solution=func is the reference solution to compute the error of our solution, and can be ignored if we don’t have a reference solution. We use 100 residual points for testing the PDE residual.

Next, we choose the network. Here, we use a fully connected neural network of depth 4 (i.e., 3 hidden layers) and width 50:

layer_size = [1] + [50] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)

Now, we have the PDE problem and the network. We build a Model and choose the optimizer and learning rate:

model = dde.Model(data, net)
model.compile("adam", lr=0.001, metrics=["l2 relative error"])

We also compute the 𝐿2 relative error as a metric during training. We can also use callbacks to save the model and the movie during training, which is optional.

checkpointer = dde.callbacks.ModelCheckpoint(
    "model/model.ckpt", verbose=1, save_better_only=True
)


# ImageMagick (https://imagemagick.org/) is required to generate the movie.
movie = dde.callbacks.MovieDumper(
    "model/movie", [-1], [1], period=100, save_spectrum=True, y_reference=func
)

We then train the model for 10000 iterations:

losshistory, train_state = model.train(
    epochs=10000, callbacks=[checkpointer, movie]
)

Complete code

"""Backend supported: tensorflow.compat.v1, tensorflow, pytorch

Documentation: https://deepxde.readthedocs.io/en/latest/demos/poisson.1d.dirichlet.html"""
import deepxde as dde
import matplotlib.pyplot as plt
import numpy as np
# Import tf if using backend tensorflow.compat.v1 or tensorflow
from deepxde.backend import tf
# Import torch if using backend pytorch
# import torch

def pde(x, y):
    dy_xx = dde.grad.hessian(y, x)
    # Use tf.sin for backend tensorflow.compat.v1 or tensorflow
    return -dy_xx - np.pi ** 2 * tf.sin(np.pi * x)
    # Use torch.sin for backend pytorch
    # return -dy_xx - np.pi ** 2 * torch.sin(np.pi * x)

def boundary(x, on_boundary):
    return on_boundary

def func(x):
    return np.sin(np.pi * x)

geom = dde.geometry.Interval(-1, 1)
bc = dde.DirichletBC(geom, func, boundary)
data = dde.data.PDE(geom, pde, bc, 16, 2, solution=func, num_test=100)

layer_size = [1] + [50] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr=0.001, metrics=["l2 relative error"])


losshistory, train_state = model.train(epochs=10000)
# Optional: Save the model during training.
# checkpointer = dde.callbacks.ModelCheckpoint(
#     "model/model.ckpt", verbose=1, save_better_only=True
# )
# Optional: Save the movie of the network solution during training.
# ImageMagick (https://imagemagick.org/) is required to generate the movie.
# movie = dde.callbacks.MovieDumper(
#     "model/movie", [-1], [1], period=100, save_spectrum=True, y_reference=func
# )
# losshistory, train_state = model.train(epochs=10000, callbacks=[checkpointer, movie])

dde.saveplot(losshistory, train_state, issave=True, isplot=True)

# Optional: Restore the saved model with the smallest training loss
# model.restore("model/model.ckpt-" + str(train_state.best_step), verbose=1)
# Plot PDE residual
x = geom.uniform_points(1000, True)
y = model.predict(x, operator=pde)
plt.figure()
plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("PDE residual")
plt.show()

• Poisson equation in 1D with Dirichlet/Neumann boundary conditions

• Poisson equation in 1D with Dirichlet/Robin boundary conditions

• Poisson equation in 1D with the multi-scale Fourier feature architecture

• Poisson equation in 1D with Dirichlet/periodic boundary conditions

• Poisson equation over an L-shaped domain

• Laplace equation on a disk

• Euler beam

2.2.3 Time-dependent PDEs

Burgers equation

Problem setup

We will solve a Burgers equation:

∂𝑢/∂𝑡 + 𝑢 ∂𝑢/∂𝑥 = 𝜈 ∂²𝑢/∂𝑥², 𝑥 ∈ [−1, 1], 𝑡 ∈ [0, 1]

with the Dirichlet boundary conditions and initial conditions

𝑢(−1, 𝑡) = 𝑢(1, 𝑡) = 0, 𝑢(𝑥, 0) = − sin(𝜋𝑥).

The reference solution is the dataset Burgers.npz loaded in the complete code below.


Implementation

This description goes through the implementation of a solver for the above described Burgers equation step-by-step.

First, the DeepXDE, NumPy (np), and TensorFlow (tf) modules are imported:

import deepxde as dde
import numpy as np
from deepxde.backend import tf

We begin by defining a computational geometry and time domain. We can use the built-in classes Interval and TimeDomain, and we combine both domains using GeometryXTime as follows

geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 0.99)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

Next, we express the PDE residual of the Burgers equation:

def pde(x, y):
    dy_x = dde.grad.jacobian(y, x, i=0, j=0)
    dy_t = dde.grad.jacobian(y, x, i=0, j=1)
    dy_xx = dde.grad.hessian(y, x, i=0, j=0)
    return dy_t + y * dy_x - 0.01 / np.pi * dy_xx

The first argument to pde is a 2-dimensional vector, where the first component (x[:, 0]) is the 𝑥-coordinate and the second component (x[:, 1]) is the 𝑡-coordinate. The second argument is the network output, i.e., the solution 𝑢(𝑥, 𝑡), but here we use y as the name of the variable.

Next, we consider the boundary/initial conditions. on_boundary is chosen here so that the whole boundary of the computational domain is used as the boundary condition. We pass the space-time geometry geomtime created above, the boundary value function, and on_boundary to the DirichletBC function of DeepXDE. We also define IC, which is the initial condition for the Burgers equation, using the computational domain, the initial function, and on_initial to specify the IC.

bc = dde.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.IC(
    geomtime, lambda x: -np.sin(np.pi * x[:, 0:1]), lambda _, on_initial: on_initial
)

Now, we have specified the geometry, PDE residual, and boundary/initial condition. We then define the TimePDE problem as

data = dde.data.TimePDE(
    geomtime, pde, [bc, ic], num_domain=2540, num_boundary=80, num_initial=160
)

The number 2540 is the number of training residual points sampled inside the domain, and the number 80 is the number of training points sampled on the boundary. We also include 160 initial residual points for the initial conditions.

Next, we choose the network. Here, we use a fully connected neural network of depth 4 (i.e., 3 hidden layers) and width 20:

net = dde.maps.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")

Now, we have the PDE problem and the network. We build a Model and choose the optimizer and learning rate:

model = dde.Model(data, net)
model.compile("adam", lr=1e-3)

We then train the model for 15000 iterations:


losshistory, train_state = model.train(epochs=15000)

After we train the network using Adam, we continue to train the network using L-BFGS to achieve a smaller loss:

model.compile("L-BFGS")
losshistory, train_state = model.train()

Complete code

"""Backend supported: tensorflow.compat.v1, tensorflow, pytorch

Documentation: https://deepxde.readthedocs.io/en/latest/demos/burgers.html"""
import deepxde as dde
import numpy as np

def gen_testdata():
    data = np.load("dataset/Burgers.npz")
    t, x, exact = data["t"], data["x"], data["usol"].T
    xx, tt = np.meshgrid(x, t)
    X = np.vstack((np.ravel(xx), np.ravel(tt))).T
    y = exact.flatten()[:, None]
    return X, y

def pde(x, y):
    dy_x = dde.grad.jacobian(y, x, i=0, j=0)
    dy_t = dde.grad.jacobian(y, x, i=0, j=1)
    dy_xx = dde.grad.hessian(y, x, i=0, j=0)
    return dy_t + y * dy_x - 0.01 / np.pi * dy_xx

geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 0.99)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

bc = dde.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.IC(
    geomtime, lambda x: -np.sin(np.pi * x[:, 0:1]), lambda _, on_initial: on_initial
)

data = dde.data.TimePDE(
    geomtime, pde, [bc, ic], num_domain=2540, num_boundary=80, num_initial=160
)
net = dde.maps.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)

model.compile("adam", lr=1e-3)
model.train(epochs=15000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave=True, isplot=True)

X, y_true = gen_testdata()


y_pred = model.predict(X)
f = model.predict(X, operator=pde)
print("Mean residual:", np.mean(np.absolute(f)))
print("L2 relative error:", dde.metrics.l2_relative_error(y_true, y_pred))
np.savetxt("test.dat", np.hstack((X, y_true, y_pred)))

• Diffusion equation

• Diffusion equation with hard initial and boundary conditions

• Diffusion equation with training points resampling

• Heat equation

• Burgers equation with residual-based adaptive refinement (RAR)

• Beltrami flow

• Kovasznay flow

• Wave propagation with spatio-temporal multi-scale Fourier feature architecture

2.2.4 Integro-differential equations

• Integro-differential equation

• Volterra IDE

2.2.5 fractional PDEs

• fractional Poisson equation in 1D

• fractional Poisson equation in 2D

• fractional Poisson equation in 3D

• fractional diffusion equation

2.3 Demos of Inverse Problems

Here are some demos of solving inverse problems of PDEs.

2.3.1 ODEs

Inverse problem for the Lorenz system

Problem setup

Implementation

Complete code

Jupyter notebook


"""Backend supported: tensorflow.compat.v1, tensorflow, pytorch

Documentation: https://deepxde.readthedocs.io/en/latest/demos/lorenz.inverse.html"""
import deepxde as dde
import numpy as np

def gen_traindata():
    data = np.load("dataset/Lorenz.npz")
    return data["t"], data["y"]

C1 = dde.Variable(1.0)
C2 = dde.Variable(1.0)
C3 = dde.Variable(1.0)

def Lorenz_system(x, y):
    """Lorenz system.

    dy1/dx = 10 * (y2 - y1)
    dy2/dx = y1 * (28 - y3) - y2
    dy3/dx = y1 * y2 - 8/3 * y3
    """
    y1, y2, y3 = y[:, 0:1], y[:, 1:2], y[:, 2:]
    dy1_x = dde.grad.jacobian(y, x, i=0)
    dy2_x = dde.grad.jacobian(y, x, i=1)
    dy3_x = dde.grad.jacobian(y, x, i=2)
    return [
        dy1_x - C1 * (y2 - y1),
        dy2_x - y1 * (C2 - y3) + y2,
        dy3_x - y1 * y2 + C3 * y3,
    ]

def boundary(_, on_initial):
    return on_initial

geom = dde.geometry.TimeDomain(0, 3)

# Initial conditions
ic1 = dde.IC(geom, lambda X: -8, boundary, component=0)
ic2 = dde.IC(geom, lambda X: 7, boundary, component=1)
ic3 = dde.IC(geom, lambda X: 27, boundary, component=2)

# Get the train data
observe_t, ob_y = gen_traindata()
observe_y0 = dde.PointSetBC(observe_t, ob_y[:, 0:1], component=0)
observe_y1 = dde.PointSetBC(observe_t, ob_y[:, 1:2], component=1)
observe_y2 = dde.PointSetBC(observe_t, ob_y[:, 2:3], component=2)

data = dde.data.PDE(
    geom,
    Lorenz_system,
    [ic1, ic2, ic3, observe_y0, observe_y1, observe_y2],
    num_domain=400,
    num_boundary=2,
    anchors=observe_t,
)

net = dde.maps.FNN([1] + [40] * 3 + [3], "tanh", "Glorot uniform")
model = dde.Model(data, net)
model.compile("adam", lr=0.001, external_trainable_variables=[C1, C2, C3])
variable = dde.callbacks.VariableValue(
    [C1, C2, C3], period=600, filename="variables.dat"
)
losshistory, train_state = model.train(epochs=60000, callbacks=[variable])
dde.saveplot(losshistory, train_state, issave=True, isplot=True)

• Inverse problem for the Lorenz system with exogenous input

2.3.2 PDEs

• Inverse problem for the diffusion equation

• Inverse Problem for the diffusion-reaction system

• Inverse problem for the Poisson equation with unknown forcing field

2.3.3 fractional PDEs

• Inverse problem for the fractional Poisson equation in 1D

• Inverse problem for the fractional Poisson equation in 2D

2.4 Demos of Function Approximation

Here are some demos of learning functions.

2.4.1 Function approximation

• Learning a function from a dataset

• Learning a function from the formula

2.4.2 Uncertainty quantification

• Learning a function with uncertainty quantification

2.4.3 Multi-fidelity learning

• Multi-fidelity learning from the formulas

• Multi-fidelity learning from a dataset


2.5 FAQ

If you have any questions about DeepXDE, first read the papers/slides and watch the video at the DeepXDE homepage, and also check the following list of frequently asked DeepXDE questions. To get further help, you can open an issue in the GitHub “Issues” section.

• Q: DeepXDE failed to run. A: #2, #3, #5

• Q: What is the output of DeepXDE? How can I visualize the results? A: #4, #9, #17, #48, #53, #73, #77, #171, #217, #218, #223

• Q: More details and examples about geometry. A: #32, #38, #161

• Q: How can I implement new ODEs/PDEs, e.g., compute derivatives? A: #12, #13, #21, #22, #74, #78, #79, #124, #172, #185, #193, #194, #246

• Q: How can I implement new IDEs? A: #95, #198

• Q: More details and examples about initial conditions. A: #19, #75, #104, #134

• Q: More details and examples about boundary conditions. A: #6, #10, #15, #16, #22, #26, #33, #38, #40, #44, #49, #115, #140, #156

• Q: By default, initial/boundary conditions are enforced in DeepXDE as soft constraints. How can I enforce them as hard constraints? A: #36, #90, #92, #252

• Q: Define an inverse problem to solve unknown parameters/fields in the PDEs or initial/boundary conditions. A: #55, #76, #86, #114, #120, #125, #178, #208, #235

• Q: How does DeepXDE choose the training points? How can I use some specific training points? A: #32, #57, #64

• Q: How can I give different weights to different residual points? A: #45

• Q: I want to have more control over network training. A: #166

• Q: I failed to train the network or get the right solution, e.g., the training loss is large. A: #15, #22, #33, #41, #61, #62, #80, #84, #85, #108, #126, #141, #188, #247

• Q: How can I use a trained model for new predictions? A: #10, #18, #93, #177

• Q: How can I save a trained model and then load the model later? A: #54, #57, #58, #63, #103, #206, #254

• Q: Residual-based adaptive refinement (RAR). A: #63

• Q: By default, DeepXDE uses float32. How can I use float64? A: #28

• Q: More details about DeepXDE source code, and want to modify DeepXDE. A: #35, #39, #66, #68, #69, #91, #99, #131, #163, #175, #202


• Q: Examples collected from users. A: Lotka–Volterra, Potential flow around a cylinder, Laminar incompressible flow passing a step, Shallow water equations

• Q: Questions about multi-fidelity neural networks. A: #94, #195

2.6 Research

Here is a list of research papers that used DeepXDE. If you would like your paper to appear here, open an issue in the GitHub “Issues” section.

2.6.1 PINN

1. S. Lee, & T. Kadeethum. Physics-informed neural networks for solving coupled flow and transport system. 2021.

2. Y. Chen, & L. Dal Negro. Physics-informed neural networks for imaging and parameter retrieval of photonic nanostructures from near-field data. arXiv preprint arXiv:2109.12754, 2021.

3. A. M. Ncube, G. E. Harmsen, & A. S. Cornell. Investigating a new approach to quasinormal modes: Physics-informed neural networks. arXiv preprint arXiv:2108.05867, 2021.

4. M. Almajid, & M. Abu-Alsaud. Prediction of porous media fluid flow using physics informed neural networks.Journal of Petroleum Science and Engineering, 109205, 2021.

5. M. Merkle. Boosting the training of physics-informed neural networks with transfer learning. 2021. [Code]

6. L. Lu, R. Pestourie, W. Yao, Z. Wang, F. Verdugo, & S. G. Johnson. Physics-informed neural networks with hard constraints for inverse design. arXiv preprint arXiv:2102.04626, 2021. [Code]

7. L. Lu, X. Meng, Z. Mao, & G. E. Karniadakis. DeepXDE: A deep learning library for solving differential equations. SIAM Review, 63(1), 208–228, 2021. [Code]

8. A. Yazdani, L. Lu, M. Raissi, & G. E. Karniadakis. Systems biology informed deep learning for inferring parameters and hidden dynamics. PLoS Computational Biology, 16(11), e1007575, 2020. [Code]

9. Q. Zhang, Y. Chen, & Z. Yang. Data driven solutions and discoveries in mechanics using physics informed neural network. Preprints, 2020060258, 2020.

10. Y. Chen, L. Lu, G. E. Karniadakis, & L. D. Negro. Physics-informed neural networks for inverse problems in nano-optics and metamaterials. Optics Express, 28(8), 11618–11633, 2020.

11. G. Pang, L. Lu, & G. E. Karniadakis. fPINNs: Fractional physics-informed neural networks. SIAM Journal on Scientific Computing, 41(4), A2603–A2626, 2019. [Code]

12. D. Zhang, L. Lu, L. Guo, & G. E. Karniadakis. Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems. Journal of Computational Physics, 397, 108850, 2019.

2.6.2 DeepONet

1. Z. Mao, L. Lu, O. Marxen, T. A. Zaki, & G. E. Karniadakis. DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators. Journal of Computational Physics, 447, 110698, 2021.


2. P. Clark Di Leoni, L. Lu, C. Meneveau, G. E. Karniadakis, & T. A. Zaki. DeepONet prediction of linear instability waves in high-speed boundary layers. arXiv preprint arXiv:2105.08697, 2021.

3. S. Cai, Z. Wang, L. Lu, T. A. Zaki, & G. E. Karniadakis. DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks. Journal of Computational Physics, 436, 110296, 2021.

4. L. Lu, P. Jin, G. Pang, Z. Zhang, & G. E. Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3, 218–229, 2021. [Code]

5. C. Lin, Z. Li, L. Lu, S. Cai, M. Maxey, & G. E. Karniadakis. Operator learning for predicting multiscale bubble growth dynamics. The Journal of Chemical Physics, 154(10), 104118, 2021.

2.6.3 Multi-fidelity NN

1. L. Lu, M. Dao, P. Kumar, U. Ramamurty, G. E. Karniadakis, & S. Suresh. Extraction of mechanical properties of materials through deep learning from instrumented indentation. Proceedings of the National Academy of Sciences, 117(13), 7052–7062, 2020. [Code]

2. X. Meng, & G. E. Karniadakis. A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems. Journal of Computational Physics, 401, 109020, 2020.

2.7 Cite DeepXDE

If you use DeepXDE for academic research, you are encouraged to cite the following paper:

@article{lu2021deepxde,
  author  = {Lu, Lu and Meng, Xuhui and Mao, Zhiping and Karniadakis, George Em},
  title   = {{DeepXDE}: A deep learning library for solving differential equations},
  journal = {SIAM Review},
  volume  = {63},
  number  = {1},
  pages   = {208-228},
  year    = {2021},
  doi     = {10.1137/19M1274067}
}

2.8 The Team

DeepXDE was originally developed by Lu Lu at Brown University under the supervision of Prof. George Karniadakis, supported by PhILMs.

DeepXDE is currently maintained by Lu Lu at the University of Pennsylvania, with major contributions coming from several talented individuals in various forms and means. A non-exhaustive but growing list needs to mention: Shunyuan Mao, Zongren Zou.


CHAPTER 3

API reference

If you are looking for information on a specific function, class or method, this part of the documentation is for you.

3.1 deepxde

3.1.1 deepxde.callbacks module

class deepxde.callbacks.Callback
Bases: object

Callback base class.

model
instance of Model. Reference of the model being trained.

init()
Init after setting a model.

on_batch_begin()
Called at the beginning of every batch.

on_batch_end()
Called at the end of every batch.

on_epoch_begin()
Called at the beginning of every epoch.

on_epoch_end()
Called at the end of every epoch.

on_predict_begin()
Called at the beginning of prediction.

on_predict_end()
Called at the end of prediction.


on_train_begin()
Called at the beginning of model training.

on_train_end()
Called at the end of model training.

set_model(model)

class deepxde.callbacks.CallbackList(callbacks=None)
Bases: deepxde.callbacks.Callback

Container abstracting a list of callbacks.

Parameters callbacks – List of Callback instances.

append(callback)

on_batch_begin()
Called at the beginning of every batch.

on_batch_end()
Called at the end of every batch.

on_epoch_begin()
Called at the beginning of every epoch.

on_epoch_end()
Called at the end of every epoch.

on_predict_begin()
Called at the beginning of prediction.

on_predict_end()
Called at the end of prediction.

on_train_begin()
Called at the beginning of model training.

on_train_end()
Called at the end of model training.

set_model(model)

class deepxde.callbacks.DropoutUncertainty(period=1000)
Bases: deepxde.callbacks.Callback

Uncertainty estimation via MC dropout.

Reference: https://arxiv.org/abs/1506.02142

Warning: This cannot be used together with other techniques that have different behaviors during training and testing, such as batch normalization.

on_epoch_end()
Called at the end of every epoch.

on_train_end()
Called at the end of model training.
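A minimal usage sketch, reusing the data object from the Poisson demo above. It assumes the dropout_rate argument of dde.maps.FNN (check the FNN signature of your backend); dropout must be enabled in the network for MC dropout to have an effect.

# Network with dropout, so that MC dropout produces nontrivial uncertainty
net = dde.maps.FNN([1] + [50] * 3 + [1], "tanh", "Glorot uniform", dropout_rate=0.01)
model = dde.Model(data, net)
model.compile("adam", lr=0.001)

uncertainty = dde.callbacks.DropoutUncertainty(period=1000)
losshistory, train_state = model.train(epochs=10000, callbacks=[uncertainty])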

class deepxde.callbacks.EarlyStopping(min_delta=0, patience=0, baseline=None)
Bases: deepxde.callbacks.Callback


Stop training when a monitored quantity (training loss) has stopped improving. Only checked at validation step according to display_every in Model.train.

Parameters

• min_delta – Minimum change in the monitored quantity to qualify as an improvement, i.e., an absolute change of less than min_delta will count as no improvement.

• patience – Number of epochs with no improvement after which training will be stopped.

• baseline – Baseline value for the monitored quantity to reach. Training will stop if the model doesn’t show improvement over the baseline.

get_monitor_value()

on_epoch_end()
Called at the end of every epoch.

on_train_begin()
Called at the beginning of model training.

on_train_end()
Called at the end of model training.
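A minimal usage sketch, reusing a compiled model from the demos above: training stops once the training loss has not improved by at least min_delta for patience epochs (checked every display_every iterations).

early_stopping = dde.callbacks.EarlyStopping(min_delta=1e-6, patience=2000)
losshistory, train_state = model.train(
    epochs=50000, display_every=1000, callbacks=[early_stopping]
)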

class deepxde.callbacks.FirstDerivative(x, component_x=0, component_y=0)
Bases: deepxde.callbacks.OperatorPredictor

Generates the first order derivative of the outputs with respect to the inputs.

Parameters x – The input data.

class deepxde.callbacks.ModelCheckpoint(filepath, verbose=0, save_better_only=False, period=1)

Bases: deepxde.callbacks.Callback

Save the model after every epoch.

Parameters

• filepath (string) – Path to save the model file.

• verbose – Verbosity mode, 0 or 1.

• save_better_only – If True, only save a better model according to the quantity monitored. Model is only checked at validation step according to display_every in Model.train.

• period – Interval (number of epochs) between checkpoints.

on_epoch_end()
Called at the end of every epoch.

class deepxde.callbacks.MovieDumper(filename, x1, x2, num_points=100, period=1, component=0, save_spectrum=False, y_reference=None)

Bases: deepxde.callbacks.Callback

Dump a movie to show the training progress of the function along a line.

Parameters spectrum – If True, dump the spectrum of the Fourier transform.

init()
Init after setting a model.

on_epoch_end()
Called at the end of every epoch.


on_train_begin()
Called at the beginning of model training.

on_train_end()
Called at the end of model training.

class deepxde.callbacks.OperatorPredictor(x, op)
Bases: deepxde.callbacks.Callback

Generates operator values for the input samples.

Parameters

• x – The input data.

• op – The operator with inputs (x, y).

get_value()

init()
Init after setting a model.

on_predict_end()
Called at the end of prediction.

class deepxde.callbacks.PDEResidualResampler(period=100)
Bases: deepxde.callbacks.Callback

Resample the training points for PDE losses every given period.

on_epoch_end()
Called at the end of every epoch.

on_train_begin()
Called at the beginning of model training.
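A minimal usage sketch, reusing a compiled model from the demos above (see also the batch_size note in Model.train): resample the PDE training points every 100 iterations instead of mini-batching.

resampler = dde.callbacks.PDEResidualResampler(period=100)
losshistory, train_state = model.train(epochs=10000, callbacks=[resampler])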

class deepxde.callbacks.Timer(available_time)
Bases: deepxde.callbacks.Callback

Stop training when training time reaches the threshold. This Timer starts after the first call of on_train_begin.

Parameters available_time (float) – Total time (in minutes) available for the training.

on_epoch_end()
Called at the end of every epoch.

on_train_begin()
Called at the beginning of model training.

class deepxde.callbacks.VariableValue(var_list, period=1, filename=None, precision=2)
Bases: deepxde.callbacks.Callback

Get the variable values.

Parameters

• var_list – A TensorFlow Variable or a list of TensorFlow Variable.

• period (int) – Interval (number of epochs) between checking values.

• filename (string) – Output the values to the file filename. The file is kept open to allow instances to be re-used. If None, output to the screen.

• precision (int) – The precision of variables to display.

get_value()
Return the variable values.


on_epoch_end()
Called at the end of every epoch.

on_train_begin()
Called at the beginning of model training.

3.1.2 deepxde.config module

deepxde.config.default_float()
Returns the default float type, as a string.

deepxde.config.set_default_float(value)
Sets the default float type.

The default floating point type is ‘float32’.

Parameters value (String) – ‘float32’ or ‘float64’.
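For example, to run in double precision (see also the float64 FAQ entry above), set the default float type before building the data and network:

import deepxde as dde

dde.config.set_default_float("float64")
print(dde.config.default_float())  # "float64"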

3.1.3 deepxde.gradients module

deepxde.gradients.clear()
Clear cached Jacobians and Hessians.

deepxde.gradients.hessian(ys, xs, component=None, i=0, j=0, grad_y=None)
Compute Hessian matrix H: H[i][j] = d^2y / dx_i dx_j, where i, j = 0, ..., dim_x - 1.

Use this function to compute second-order derivatives instead of tf.gradients() or torch.autograd.grad(), because

• It is lazy evaluation, i.e., it only computes H[i][j] when needed.

• It will remember the gradients that have already been computed to avoid duplicate computation.

Parameters

• ys – Output Tensor of shape (batch_size, dim_y).

• xs – Input Tensor of shape (batch_size, dim_x).

• component – If dim_y > 1, then ys[:, component] is used as y to compute the Hessian. If dim_y = 1, component must be None.

• i (int) –

• j (int) –

• grad_y – The gradient of y w.r.t. xs. Provide grad_y if known to avoid duplicate computation. grad_y can be computed from jacobian. Even if you do not provide grad_y, there is no duplicate computation if you use jacobian to compute first-order derivatives.

Returns H[i][j].

deepxde.gradients.jacobian(ys, xs, i=0, j=None)
Compute Jacobian matrix J: J[i][j] = dy_i / dx_j, where i = 0, ..., dim_y - 1 and j = 0, ..., dim_x - 1.

Use this function to compute first-order derivatives instead of tf.gradients() or torch.autograd.grad(), because

• It is lazy evaluation, i.e., it only computes J[i][j] when needed.

• It will remember the gradients that have already been computed to avoid duplicate computation.


Parameters

• ys – Output Tensor of shape (batch_size, dim_y).

• xs – Input Tensor of shape (batch_size, dim_x).

• i (int) –

• j (int or None) –

Returns J[i][j] in Jacobian matrix J. If j is None, returns the gradient of y_i, i.e., J[i].
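These are the functions used in the demos above as dde.grad.jacobian and dde.grad.hessian. As a minimal sketch, the residual of the 2D Laplace equation Δu = 0 can be written with hessian; here x has two columns and y one.

def pde(x, y):
    # x has shape (batch_size, 2); y has shape (batch_size, 1)
    dy_xx = dde.grad.hessian(y, x, i=0, j=0)  # d^2y / dx_0^2
    dy_yy = dde.grad.hessian(y, x, i=1, j=1)  # d^2y / dx_1^2
    return dy_xx + dy_yy  # Laplace residual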

3.1.4 deepxde.losses module

deepxde.losses.get(identifier)

deepxde.losses.mean_absolute_error(y_true, y_pred)

deepxde.losses.mean_absolute_percentage_error(y_true, y_pred)

deepxde.losses.mean_squared_error(y_true, y_pred)

deepxde.losses.softmax_cross_entropy(y_true, y_pred)

deepxde.losses.zero(*_)

3.1.5 deepxde.metrics module

deepxde.metrics.absolute_percentage_error_std(y_true, y_pred)

deepxde.metrics.accuracy(y_true, y_pred)

deepxde.metrics.get(identifier)

deepxde.metrics.l2_relative_error(y_true, y_pred)

deepxde.metrics.max_absolute_percentage_error(y_true, y_pred)

deepxde.metrics.mean_absolute_percentage_error(y_true, y_pred)

deepxde.metrics.mean_l2_relative_error(y_true, y_pred)
Compute the average of L2 relative error along the first axis.

deepxde.metrics.mean_squared_error(y_true, y_pred)

deepxde.metrics.nanl2_relative_error(y_true, y_pred)
Return the L2 relative error treating Not a Numbers (NaNs) as zero.
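A small usage sketch of the metrics with hypothetical NumPy arrays:

import numpy as np
import deepxde as dde

y_true = np.array([[1.0], [2.0], [3.0]])
y_pred = np.array([[1.1], [1.9], [3.2]])
print(dde.metrics.l2_relative_error(y_true, y_pred))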

3.1.6 deepxde.model module

class deepxde.model.Model(data, net)
Bases: object

A Model trains a NN on a Data.

Parameters

• data – deepxde.data.Data instance.

• net – deepxde.nn.NN instance.


compile(optimizer, lr=None, loss='MSE', metrics=None, decay=None, loss_weights=None, external_trainable_variables=None)

Configures the model for training.

Parameters

• optimizer – String. Name of optimizer.

• lr – A Tensor or a floating point value. The learning rate. For L-BFGS, use dde.optimizers.set_LBFGS_options to set the hyperparameters.

• loss – If the same loss is used for all errors, then loss is a String (name of objective function) or objective function. If different errors use different losses, then loss is a list whose size is equal to the number of errors.

• metrics – List of metrics to be evaluated by the model during training.

• decay – Tuple. Name and parameters of decay to the initial learning rate. One of the following options:

– inverse time decay: (“inverse time”, decay_steps, decay_rate)

– cosine decay: (“cosine”, decay_steps, alpha)

• loss_weights – A list specifying scalar coefficients (Python floats) to weight the loss contributions. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients.

• external_trainable_variables – A trainable tf.Variable object or a list of trainable tf.Variable objects. The unknown parameters in the physics systems that need to be recovered. If the backend is tensorflow.compat.v1, external_trainable_variables is ignored, and all trainable tf.Variable objects are automatically collected.

predict(x, operator=None, callbacks=None)
Generates output predictions for the input samples.

print_model()
Prints all trainable variables.

restore(save_path, verbose=0)
Restore all variables from a disk file.

save(save_path, protocol='tf.train.Saver', verbose=0)
Saves all variables to a disk file.

Parameters protocol (string) – If protocol is “tf.train.Saver”, save using tf.train.Saver. If protocol is “pickle”, save using the Python pickle module. Only the “tf.train.Saver” protocol supports restore().

state_dict()
Returns a dictionary containing all variables.

train(epochs=None, batch_size=None, display_every=1000, disregard_previous_best=False, callbacks=None, model_restore_path=None, model_save_path=None)

Trains the model for a fixed number of epochs (iterations on a dataset).

Parameters

• epochs – Integer. Number of iterations to train the model. Note: It is the number of iterations, not the number of epochs.


• batch_size – Integer or None. If you solve PDEs via dde.data.PDE or dde.data.TimePDE, do not use batch_size, and instead use dde.callbacks.PDEResidualResampler; see an example.

• display_every – Integer. Print the loss and metrics every display_every steps.

• disregard_previous_best – If True, disregard the previous saved best model.

• callbacks – List of dde.callbacks.Callback instances. List of callbacks to apply during training.

• model_restore_path – String. Path where parameters were previously saved. See save_path in tf.train.Saver.restore.

• model_save_path – String. Prefix of filenames created for the checkpoint. See save_path in tf.train.Saver.save.
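A usage sketch combining the options above, reusing data and net from the demos; the decay tuple, loss_weights, and save path are illustrative values, not defaults.

model = dde.Model(data, net)
model.compile(
    "adam",
    lr=0.001,
    metrics=["l2 relative error"],
    decay=("inverse time", 1000, 0.5),
    loss_weights=[1, 10],  # one weight per loss term, illustrative values
)
losshistory, train_state = model.train(
    epochs=20000, display_every=1000, model_save_path="model/model.ckpt"
)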

class deepxde.model.TrainState
Bases: object

disregard_best()

packed_data()

set_data_test(X_test, y_test, test_aux_vars=None)

set_data_train(X_train, y_train, train_aux_vars=None)

update_best()

class deepxde.model.LossHistory
Bases: object

append(step, loss_train, loss_test, metrics_test)

set_loss_weights(loss_weights)

3.1.7 deepxde.postprocessing module

deepxde.postprocessing.plot_best_state(train_state)
Plot the best result of the smallest training loss.

This function only works for 1D and 2D problems. For other problems and to better customize the figure, use save_best_state().

Note: You need to call plt.show() to show the figure.

Parameters train_state – TrainState instance. The second variable returned from Model.train().

deepxde.postprocessing.plot_loss_history(loss_history, fname=None)
Plot the training and testing loss history.

Note: You need to call plt.show() to show the figure.

Parameters


• loss_history – LossHistory instance. The first variable returned from Model.train().

• fname (string) – If fname is a string (e.g., ‘loss_history.png’), then save the figure to the file of the file name fname.

deepxde.postprocessing.save_best_state(train_state, fname_train, fname_test)
Save the best result of the smallest training loss to a file.

deepxde.postprocessing.save_loss_history(loss_history, fname)
Save the training and testing loss history to a file.

deepxde.postprocessing.saveplot(loss_history, train_state, issave=True, isplot=True, loss_fname='loss.dat', train_fname='train.dat', test_fname='test.dat', output_dir=None)

Save/plot the best trained result and loss history.

This function is used to quickly check your results. To better investigate your result, use save_loss_history() and save_best_state().

Parameters output_dir (string) – If None, use the current working directory.

3.1.8 deepxde.real module

class deepxde.real.Real(precision)
Bases: object

set_float32()

set_float64()

3.2 deepxde.data

3.2.1 deepxde.data.constraint module

class deepxde.data.constraint.Constraint(constraint, train_x, test_x)
Bases: deepxde.data.data.Data

General constraints.

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

3.2.2 deepxde.data.data module

class deepxde.data.data.Data
Bases: object

Data base class.


losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

class deepxde.data.data.Tuple(train_x, train_y, test_x, test_y)
Bases: deepxde.data.data.Data

Dataset with each data point as a tuple.

Each data tuple is split into two parts: input tuple (x) and output tuple (y).

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

3.2.3 deepxde.data.dataset module

class deepxde.data.dataset.DataSet(X_train=None, y_train=None, X_test=None, y_test=None, fname_train=None, fname_test=None, col_x=None, col_y=None, standardize=False)

Bases: deepxde.data.data.Data

Fitting Data set.

Parameters

• col_x – List of integers.

• col_y – List of integers.

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

transform_inputs(x)

3.2.4 deepxde.data.fpde module

class deepxde.data.fpde.FPDE(geometry, fpde, alpha, bcs, resolution, meshtype='dynamic', num_domain=0, num_boundary=0, train_distribution='Sobol', anchors=None, solution=None, num_test=None)

Bases: deepxde.data.pde.PDE

Fractional PDE solver.

D-dimensional fractional Laplacian of order alpha/2 (1 < alpha < 2) is defined as: (-Delta)^(alpha/2) u(x) = C(alpha, D) int_{||theta||=1} D_theta^alpha u(x) d theta, where C(alpha, D) = gamma((1-alpha)/2) * gamma((D+alpha)/2) / (2 pi^((D+1)/2)), D_theta^alpha is the Riemann-Liouville directional fractional derivative, and theta is the differentiation direction vector. The solution u(x) is assumed to be identically zero in the boundary and exterior of the domain. When D = 1, C(alpha, D) = 1 / (2 cos(alpha * pi / 2)).

This solver does not consider C(alpha, D) in the fractional Laplacian, and only discretizes int_{||theta||=1} D_theta^alpha u(x) d theta. D_theta^alpha is approximated by the Grunwald-Letnikov formula.

get_int_matrix(training)

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

test_points()

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

class deepxde.data.fpde.Fractional(alpha, geom, disc, x0)
Bases: object

Fractional derivative.

Parameters x0 – If disc.meshtype = static, then x0 should be None; if disc.meshtype = 'dynamic', then x0 are non-boundary points.

dynamic_dist2npts(dx)

get_matrix(sparse=False)

get_matrix_dynamic(sparse)

get_matrix_static()

get_weight(n)

get_x()

get_x_dynamic()

get_x_static()

modify_first_order(x, w)

modify_second_order(x=None, w=None)

modify_third_order(x=None, w=None)

class deepxde.data.fpde.FractionalTime(alpha, geom, tmin, tmax, disc, nt, x0)
Bases: object

Fractional derivative with time.

Parameters

• nt – If disc.meshtype = static, then nt is the number of t points; if disc.meshtype = 'dynamic', then nt is None.

• x0 – If disc.meshtype = static, then x0 should be None; if disc.meshtype ='dynamic', then x0 are non-boundary points.

nx
If disc.meshtype = static, then nx is the number of x points; if disc.meshtype = dynamic, then nx is the resolution lambda.


get_matrix(sparse=False)

get_matrix_dynamic(sparse)

get_matrix_static()

get_x()

get_x_dynamic()

get_x_static()

class deepxde.data.fpde.Scheme(meshtype, resolution)
Bases: object

Fractional Laplacian discretization.

Discretize the fractional Laplacian using a quadrature rule for the integral with respect to the directions and the Grunwald-Letnikov (GL) formula for the Riemann-Liouville directional fractional derivative.

Parameters

• meshtype (string) – “static” or “dynamic”.

• resolution – A list of integers. The first number is the number of quadrature points in the first direction, ..., and the last number is the GL parameter.

class deepxde.data.fpde.TimeFPDE(geometryxtime, fpde, alpha, ic_bcs, resolution, meshtype='dynamic', num_domain=0, num_boundary=0, num_initial=0, train_distribution='Sobol', anchors=None, solution=None, num_test=None)

Bases: deepxde.data.fpde.FPDE

Time-dependent fractional PDE solver.

D-dimensional fractional Laplacian of order alpha/2 (1 < alpha < 2) is defined as: (-Delta)^(alpha/2) u(x) = C(alpha, D) int_{||theta||=1} D_theta^alpha u(x) d theta, where C(alpha, D) = gamma((1-alpha)/2) * gamma((D+alpha)/2) / (2 pi^((D+1)/2)), D_theta^alpha is the Riemann-Liouville directional fractional derivative, and theta is the differentiation direction vector. The solution u(x) is assumed to be identically zero in the boundary and exterior of the domain. When D = 1, C(alpha, D) = 1 / (2 cos(alpha * pi / 2)).

This solver does not consider C(alpha, D) in the fractional Laplacian, and only discretizes int_{||theta||=1} D_theta^alpha u(x) d theta. D_theta^alpha is approximated by the Grunwald-Letnikov formula.

get_int_matrix(training)

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

train_points()

3.2.5 deepxde.data.func_constraint module

class deepxde.data.func_constraint.FuncConstraint(geom, constraint, func, num_train, anchors, num_test, dist_train='uniform')

Bases: deepxde.data.data.Data

Function approximation with constraints.


losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

3.2.6 deepxde.data.function module

class deepxde.data.function.Function(geometry, function, num_train, num_test, train_distribution='uniform', online=False)

Bases: deepxde.data.data.Data

Approximate a function via a network.

Parameters

• geometry – The domain of the function. Instance of Geometry.

• function – The function to be approximated. A callable function that takes a NumPy array as the input and returns a NumPy array of the corresponding function values.

• num_train (int) – The number of training points sampled inside the domain.

• num_test (int) –

• train_distribution (string) – The distribution to sample training points. One of the following: “uniform” (equispaced grid), “pseudo” (pseudorandom), “LHS” (Latin hypercube sampling), “Halton” (Halton sequence), “Hammersley” (Hammersley sequence), or “Sobol” (Sobol sequence).

• online (bool) – If True, resample the pseudorandom training points every training step, otherwise, use the same training points.

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

3.2.7 deepxde.data.helper module

deepxde.data.helper.one_function(dim_outputs)

deepxde.data.helper.zero_function(dim_outputs)

3.2.8 deepxde.data.ide module

class deepxde.data.ide.IDE(geometry, ide, bcs, quad_deg, kernel=None, num_domain=0, num_boundary=0, train_distribution='Sobol', anchors=None, solution=None, num_test=None)

Bases: deepxde.data.pde.PDE

IDE solver.


The current version only supports 1D problems with the integral int_0^x K(x, t) y(t) dt.

Parameters kernel – (x, t) –> R.

get_int_matrix(training)

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

quad_points(X)

test()
Return a test dataset.

test_points()

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

3.2.9 deepxde.data.mf module

class deepxde.data.mf.MfDataSet(X_lo_train=None, X_hi_train=None, y_lo_train=None, y_hi_train=None, X_hi_test=None, y_hi_test=None, fname_lo_train=None, fname_hi_train=None, fname_hi_test=None, col_x=None, col_y=None)

Bases: deepxde.data.data.Data

Multifidelity function approximation from data set.

Parameters

• col_x – List of integers.

• col_y – List of integers.

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

class deepxde.data.mf.MfFunc(geom, func_lo, func_hi, num_lo, num_hi, num_test, dist_train='uniform')

Bases: deepxde.data.data.Data

Multifidelity function approximation.

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.


3.2.10 deepxde.data.pde module

class deepxde.data.pde.PDE(geometry, pde, bcs, num_domain=0, num_boundary=0, train_distribution='Sobol', anchors=None, exclusions=None, solution=None, num_test=None, auxiliary_var_function=None)

Bases: deepxde.data.data.Data

ODE or time-independent PDE solver.

Parameters

• geometry – Instance of Geometry.

• pde – A global PDE or a list of PDEs. None if no global PDE.

• bcs – A boundary condition or a list of boundary conditions. Use [] if no boundarycondition.

• num_domain (int) – The number of training points sampled inside the domain.

• num_boundary (int) – The number of training points sampled on the boundary.

• train_distribution (string) – The distribution to sample training points. Oneof the following: “uniform” (equispaced grid), “pseudo” (pseudorandom), “LHS” (Latinhypercube sampling), “Halton” (Halton sequence), “Hammersley” (Hammersley sequence),or “Sobol” (Sobol sequence).

• anchors – A Numpy array of training points, in addition to the num_domain andnum_boundary sampled points.

• exclusions – A Numpy array of points to be excluded for training.

• solution – The reference solution.

• num_test – The number of points sampled inside the domain for testing. The testing points on the boundary are the same set of points used for training. If None, then the training points will be used for testing.

• auxiliary_var_function – A function that inputs train_x or test_x and outputs auxiliary variables.

Warning: The testing points include points inside the domain and points on the boundary, and they may not have the same density, so the testing points as a whole may not be uniformly distributed. As a result, if you have a reference solution (solution) and would like to compute a metric such as

Model.compile(metrics=["l2 relative error"])

then the metric may not be very accurate. To compute the metric more accurately, you can sample the points manually, and then use Model.predict() to predict the solution on these points and compute the metric:

x = geom.uniform_points(num, boundary=True)
y_true = ...
y_pred = model.predict(x)
error = dde.metrics.l2_relative_error(y_true, y_pred)

train_x_all
A Numpy array of all points for training. train_x_all is unordered and contains no duplicate points.

train_x
A Numpy array of the points fed into the network for training. train_x is constructed from train_x_all, ordered from BCs to PDE, and may contain duplicate points.


train_x_bc
A Numpy array of the training points for BCs. train_x_bc is constructed from train_x_all at the first step of training; by default it is not updated when train_x_all changes. To update train_x_bc, set it to None, call bc_points, and then update the loss function by model.compile().
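A minimal sketch of that recipe, assuming data is a dde.data.PDE instance and model is a dde.Model that has already been compiled:

data.train_x_bc = None           # invalidate the cached BC points
data.bc_points()                 # rebuild train_x_bc from the current train_x_all
model.compile("adam", lr=0.001)  # recompile so the loss uses the new BC points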

num_bcs
num_bcs[i] is the number of points for bcs[i].

Type list

test_x
A Numpy array of the points fed into the network for testing, ordered from BCs to PDE. The BC points are exactly the same points as in train_x_bc.

train_aux_vars
Auxiliary variables associated with train_x.

test_aux_vars
Auxiliary variables associated with test_x.

add_anchors(anchors)
Add new points for training PDE losses. The BC points will not be updated.

bc_points()

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

resample_train_points()
Resample the training points for PDEs. The BC points will not be updated.

test()
Return a test dataset.

test_points()

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

train_points()

class deepxde.data.pde.TimePDE(geometryxtime, pde, ic_bcs, num_domain=0, num_boundary=0, num_initial=0, train_distribution='Sobol', anchors=None, exclusions=None, solution=None, num_test=None, auxiliary_var_function=None)

Bases: deepxde.data.pde.PDE

Time-dependent PDE solver.

Parameters num_initial (int) – The number of training points sampled on the initial location.
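For example, a time-dependent problem is typically assembled as follows (a minimal sketch; pde, bc, and ic stand for a user-defined residual, boundary condition, and initial condition, and the point counts are illustrative):

import deepxde as dde

geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

# `pde`, `bc`, and `ic` are assumed to be defined by the user.
data = dde.data.TimePDE(
    geomtime, pde, [bc, ic],
    num_domain=2540, num_boundary=80, num_initial=160,
)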

train_points()

3.2.11 deepxde.data.sampler module

class deepxde.data.sampler.BatchSampler(num_samples, shuffle=True)

Bases: object

Samples a mini-batch of indices.

The indices are repeated indefinitely. Has the same effect as:


indices = tf.data.Dataset.range(num_samples)
indices = indices.repeat().shuffle(num_samples).batch(batch_size)
iterator = iter(indices)
batch_indices = iterator.get_next()

However, tf.data.Dataset.__iter__() is only supported inside of tf.function or when eager execution is enabled. tf.data.Dataset.make_one_shot_iterator() supports graph mode, but is too slow.

This class is not implemented as a Python Iterator, so that it can support dynamic batch size.

Parameters

• num_samples (int) – The number of samples.

• shuffle (bool) – Set to True to have the indices reshuffled at every epoch.

epochs_completed

get_next(batch_size)
Returns the indices of the next batch.

Parameters batch_size (int) – The number of elements to combine in a single batch.
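A hedged usage sketch (the sample count and batch sizes are arbitrary):

from deepxde.data.sampler import BatchSampler

sampler = BatchSampler(10, shuffle=True)
idx1 = sampler.get_next(4)   # 4 indices drawn from a shuffled permutation of range(10)
idx2 = sampler.get_next(7)   # the batch size may change between calls
print(sampler.epochs_completed)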

3.2.12 deepxde.data.triple module

class deepxde.data.triple.Triple(X_train, y_train, X_test, y_test)

Bases: deepxde.data.data.Data

Dataset with each data point as a triple.

The first two elements are the input, and the third element is the output. This dataset can be used with the network DeepONet for operator learning. Reference: Lu et al. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat Mach Intell, 2021.

Parameters

• X_train – A tuple of two NumPy arrays.

• y_train – A NumPy array.

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

class deepxde.data.triple.TripleCartesianProd(X_train, y_train, X_test, y_test)

Bases: deepxde.data.data.Data

Dataset with each data point as a triple. The ordered pair of the first two elements is created from a Cartesian product of the first two lists. If we compute the Cartesian product of the first two arrays, then we have a Triple dataset.

This dataset can be used with the network DeepONetCartesianProd for operator learning.

Parameters

• X_train – A tuple of two NumPy arrays. The first element has the shape (N1, dim1), and the second element has the shape (N2, dim2). The mini-batch is only applied to N1.


• y_train – A NumPy array of shape (N1, N2).
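A hedged sketch of the expected shapes, using random arrays purely for illustration:

import numpy as np
import deepxde as dde

# 100 training input functions sampled at 50 sensors, evaluated at 40 trunk locations.
X_train = (np.random.rand(100, 50), np.random.rand(40, 1))
y_train = np.random.rand(100, 40)
X_test = (np.random.rand(20, 50), np.random.rand(40, 1))
y_test = np.random.rand(20, 40)

data = dde.data.TripleCartesianProd(X_train, y_train, X_test, y_test)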

losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.

test()
Return a test dataset.

train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.

3.3 deepxde.geometry

3.3.1 deepxde.geometry.csg module

class deepxde.geometry.csg.CSGDifference(geom1, geom2)

Bases: deepxde.geometry.geometry.Geometry

Construct an object by CSG Difference.

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

inside(x)Check if x is inside the geometry (including the boundary).

on_boundary(x)Check if x is on the geometry boundary.

periodic_point(x, component)Compute the periodic image of x for periodic boundary condition.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

random_points(n, random=’pseudo’)Compute the random point locations in the geometry.

class deepxde.geometry.csg.CSGIntersection(geom1, geom2)

Bases: deepxde.geometry.geometry.Geometry

Construct an object by CSG Intersection.

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

inside(x)Check if x is inside the geometry (including the boundary).

on_boundary(x)Check if x is on the geometry boundary.

periodic_point(x, component)Compute the periodic image of x for periodic boundary condition.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

random_points(n, random=’pseudo’)Compute the random point locations in the geometry.


class deepxde.geometry.csg.CSGUnion(geom1, geom2)

Bases: deepxde.geometry.geometry.Geometry

Construct an object by CSG Union.

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

inside(x)Check if x is inside the geometry (including the boundary).

on_boundary(x)Check if x is on the geometry boundary.

periodic_point(x, component)Compute the periodic image of x for periodic boundary condition.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

random_points(n, random=’pseudo’)Compute the random point locations in the geometry.

3.3.2 deepxde.geometry.geometry module

class deepxde.geometry.geometry.Geometry(dim, bbox, diam)

Bases: abc.ABC

background_points(x, dirn, dist2npt, shift)

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

difference(other)CSG Difference.

distance2boundary(x, dirn)

inside(x)Check if x is inside the geometry (including the boundary).

intersection(other)CSG Intersection.

mindist2boundary(x)

on_boundary(x)Check if x is on the geometry boundary.

periodic_point(x, component)Compute the periodic image of x for periodic boundary condition.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

random_points(n, random=’pseudo’)Compute the random point locations in the geometry.

uniform_boundary_points(n)Compute the equispaced point locations on the boundary.

uniform_points(n, boundary=True)Compute the equispaced point locations in the geometry.


union(other)
CSG Union.
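As a hedged illustration of the three CSG operations (the shapes and sizes are arbitrary):

import deepxde as dde

rect = dde.geometry.Rectangle([0, 0], [2, 1])
disk = dde.geometry.Disk([1, 0.5], 0.25)

geom_diff = rect.difference(disk)     # rectangle with a circular hole
geom_union = rect.union(disk)         # rectangle merged with the disk
geom_inter = rect.intersection(disk)  # overlap of the two shapes

x = geom_diff.random_points(1000)     # sample training points inside the CSG geometry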

3.3.3 deepxde.geometry.geometry_1d module

class deepxde.geometry.geometry_1d.Interval(l, r)

Bases: deepxde.geometry.geometry.Geometry

background_points(x, dirn, dist2npt, shift)

Parameters

• dirn – -1 (left), or 1 (right), or 0 (both direction).

• dist2npt – A function which converts distance to the number of extra points (not including x).

• shift – The number of shift.

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

distance2boundary(x, dirn)

inside(x)Check if x is inside the geometry (including the boundary).

log_uniform_points(n, boundary=True)

mindist2boundary(x)

on_boundary(x)Check if x is on the geometry boundary.

periodic_point(x, component=0)Compute the periodic image of x for periodic boundary condition.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

random_points(n, random=’pseudo’)Compute the random point locations in the geometry.

uniform_boundary_points(n)Compute the equispaced point locations on the boundary.

uniform_points(n, boundary=True)
Compute the equispaced point locations in the geometry.
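A hedged sampling sketch on the unit interval (the point counts and the "Sobol" choice are illustrative):

import deepxde as dde

geom = dde.geometry.Interval(0, 1)
x_uniform = geom.uniform_points(11, boundary=True)   # 0.0, 0.1, ..., 1.0
x_random = geom.random_points(100, random="Sobol")   # quasirandom interior points
x_bdry = geom.uniform_boundary_points(2)             # the two endpoints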

3.3.4 deepxde.geometry.geometry_2d module

class deepxde.geometry.geometry_2d.Disk(center, radius)

Bases: deepxde.geometry.geometry.Geometry

background_points(x, dirn, dist2npt, shift)

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

distance2boundary(x, dirn)


distance2boundary_unitdirn(x, dirn)https://en.wikipedia.org/wiki/Line%E2%80%93sphere_intersection

inside(x)Check if x is inside the geometry (including the boundary).

mindist2boundary(x)

on_boundary(x)Check if x is on the geometry boundary.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

random_points(n, random=’pseudo’)http://mathworld.wolfram.com/DiskPointPicking.html

uniform_boundary_points(n)Compute the equispaced point locations on the boundary.

class deepxde.geometry.geometry_2d.Polygon(vertices)

Bases: deepxde.geometry.geometry.Geometry

Simple polygon.

Parameters vertices – The order of vertices can be in a clockwise or counterclockwise direction. The vertices will be re-ordered in counterclockwise (right hand rule).

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

inside(x)Check if x is inside the geometry (including the boundary).

on_boundary(x)Check if x is on the geometry boundary.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

random_points(n, random=’pseudo’)Compute the random point locations in the geometry.

uniform_boundary_points(n)Compute the equispaced point locations on the boundary.

class deepxde.geometry.geometry_2d.Rectangle(xmin, xmax)

Bases: deepxde.geometry.geometry_nd.Hypercube

Parameters

• xmin – Coordinate of bottom left corner.

• xmax – Coordinate of top right corner.

static is_valid(vertices)Check if the geometry is a Rectangle.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

uniform_boundary_points(n)Compute the equispaced point locations on the boundary.


class deepxde.geometry.geometry_2d.Triangle(x1, x2, x3)

Bases: deepxde.geometry.geometry.Geometry

Triangle.

The order of vertices can be in a clockwise or counterclockwise direction. The vertices will be re-ordered in counterclockwise (right hand rule).

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

inside(x)See https://stackoverflow.com/a/2049593/12679294

on_boundary(x)Check if x is on the geometry boundary.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

random_points(n, random=’pseudo’)There are two methods for triangle point picking.

Method 1 (used here):

• https://math.stackexchange.com/questions/18686/uniform-random-point-in-triangle

Method 2:

• http://mathworld.wolfram.com/TrianglePointPicking.html

• https://hbfs.wordpress.com/2010/10/05/random-points-in-a-triangle-generating-random-sequences-ii/

• https://stackoverflow.com/questions/19654251/random-point-inside-triangle-inside-java

uniform_boundary_points(n)
Compute the equispaced point locations on the boundary.

deepxde.geometry.geometry_2d.clockwise_rotation_90(v)
Rotate a vector by 90 degrees clockwise about the origin.

deepxde.geometry.geometry_2d.is_left(P0, P1, P2)
Test if a point is Left|On|Right of an infinite line. See: the January 2001 Algorithm "Area of 2D and 3D Triangles and Polygons".

Parameters

• P0 – One point in the line.

• P1 – One point in the line.

• P2 – An array of points to be tested.

Returns >0 if P2 left of the line through P0 and P1, =0 if P2 on the line, <0 if P2 right of the line.

deepxde.geometry.geometry_2d.is_on_line_segment(P0, P1, P2)Test if a point is on a line segment.

Parameters

• P0 – One point in the line.

• P1 – One point in the line.

• P2 – The point to be tested.


deepxde.geometry.geometry_2d.is_rectangle(vertices)Check if the geometry is a rectangle. https://stackoverflow.com/questions/2303278/find-if-4-points-on-a-plane-form-a-rectangle/2304031

1. Find the center of mass of corner points: cx=(x1+x2+x3+x4)/4, cy=(y1+y2+y3+y4)/4

2. Test if the squares of the distances from the center of mass to all 4 corners are equal

deepxde.geometry.geometry_2d.polygon_signed_area(vertices)The (signed) area of a simple polygon.

If the vertices are in the counterclockwise direction, then the area is positive; if they are in the clockwise direction, the area is negative.

Shoelace formula: https://en.wikipedia.org/wiki/Shoelace_formula

3.3.5 deepxde.geometry.geometry_3d module

class deepxde.geometry.geometry_3d.Cuboid(xmin, xmax)

Bases: deepxde.geometry.geometry_nd.Hypercube

Parameters

• xmin – Coordinate of bottom left corner.

• xmax – Coordinate of top right corner.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.

uniform_boundary_points(n)Compute the equispaced point locations on the boundary.

class deepxde.geometry.geometry_3d.Sphere(center, radius)

Bases: deepxde.geometry.geometry_nd.Hypersphere

Parameters

• center – Center of the sphere.

• radius – Radius of the sphere.

3.3.6 deepxde.geometry.geometry_nd module

class deepxde.geometry.geometry_nd.Hypercube(xmin, xmax)

Bases: deepxde.geometry.geometry.Geometry

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

inside(x)Check if x is inside the geometry (including the boundary).

on_boundary(x)Check if x is on the geometry boundary.

periodic_point(x, component)Compute the periodic image of x for periodic boundary condition.

random_boundary_points(n, random=’pseudo’)Compute the random point locations on the boundary.


random_points(n, random=’pseudo’)Compute the random point locations in the geometry.

uniform_points(n, boundary=True)Compute the equispaced point locations in the geometry.

class deepxde.geometry.geometry_nd.Hypersphere(center, radius)

Bases: deepxde.geometry.geometry.Geometry

background_points(x, dirn, dist2npt, shift)

boundary_normal(x)Compute the unit normal at x for Neumann or Robin boundary conditions.

distance2boundary(x, dirn)

distance2boundary_unitdirn(x, dirn)https://en.wikipedia.org/wiki/Line%E2%80%93sphere_intersection

inside(x)Check if x is inside the geometry (including the boundary).

mindist2boundary(x)

on_boundary(x)Check if x is on the geometry boundary.

random_boundary_points(n, random=’pseudo’)http://mathworld.wolfram.com/HyperspherePointPicking.html

random_points(n, random=’pseudo’)https://math.stackexchange.com/questions/87230/picking-random-points-in-the-volume-of-sphere-with-uniform-probability

3.3.7 deepxde.geometry.sampler module

deepxde.geometry.sampler.pseudo(n_samples, dimension)Pseudo random.

deepxde.geometry.sampler.quasirandom(n_samples, dimension, sampler)

deepxde.geometry.sampler.sample(n_samples, dimension, sampler='pseudo')
Generate random or quasirandom samples in [0, 1]^dimension.

Parameters

• n_samples (int) – The number of samples.

• dimension (int) – Space dimension.

• sampler (string) – One of the following: "pseudo" (pseudorandom), "LHS" (Latin hypercube sampling), "Halton" (Halton sequence), "Hammersley" (Hammersley sequence), or "Sobol" (Sobol sequence).
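For example (a minimal sketch; the counts are arbitrary):

from deepxde.geometry.sampler import sample

x = sample(100, 2, sampler="Sobol")   # shape (100, 2): quasirandom points in [0, 1]^2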

3.3.8 deepxde.geometry.timedomain module

class deepxde.geometry.timedomain.GeometryXTime(geometry, timedomain)

Bases: object

boundary_normal(x)

on_boundary(x)


on_initial(x)

periodic_point(x, component)

random_boundary_points(n, random=’pseudo’)

random_initial_points(n, random=’pseudo’)

random_points(n, random=’pseudo’)

uniform_boundary_points(n)Uniform boundary points on the spatio-temporal domain.

Geometry surface area ~ bbox. Time surface area ~ diam.

uniform_initial_points(n)

uniform_points(n, boundary=True)Uniform points on the spatio-temporal domain.

Geometry volume ~ bbox. Time volume ~ diam.

class deepxde.geometry.timedomain.TimeDomain(t0, t1)

Bases: deepxde.geometry.geometry_1d.Interval

on_initial(t)

3.4 deepxde.icbcs

3.4.1 deepxde.icbcs.boundary_conditions module

Boundary conditions.

class deepxde.icbcs.boundary_conditions.BC(geom, on_boundary, component)

Bases: abc.ABC

Boundary condition base class.

Parameters

• geom – A deepxde.geometry.Geometry instance.

• on_boundary – A function: (x, Geometry.on_boundary(x)) -> True/False.

• component – The output component satisfying this BC.

collocation_points(X)

error(X, inputs, outputs, beg, end)Returns the loss.

filter(X)

normal_derivative(X, inputs, outputs, beg, end)

class deepxde.icbcs.boundary_conditions.DirichletBC(geom, func, on_boundary, component=0)

Bases: deepxde.icbcs.boundary_conditions.BC

Dirichlet boundary conditions: y(x) = func(x).
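A hedged sketch of a homogeneous Dirichlet condition imposed on the entire boundary (the geometry and the zero target are illustrative):

import deepxde as dde

geom = dde.geometry.Interval(0, 1)

def boundary(x, on_boundary):
    return on_boundary   # apply the BC everywhere on the boundary

bc = dde.DirichletBC(geom, lambda x: 0, boundary)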

error(X, inputs, outputs, beg, end)Returns the loss.


class deepxde.icbcs.boundary_conditions.NeumannBC(geom, func, on_boundary, component=0)

Bases: deepxde.icbcs.boundary_conditions.BC

Neumann boundary conditions: dy/dn(x) = func(x).

error(X, inputs, outputs, beg, end)Returns the loss.

class deepxde.icbcs.boundary_conditions.RobinBC(geom, func, on_boundary, component=0)

Bases: deepxde.icbcs.boundary_conditions.BC

Robin boundary conditions: dy/dn(x) = func(x, y).

error(X, inputs, outputs, beg, end)Returns the loss.

class deepxde.icbcs.boundary_conditions.PeriodicBC(geom, component_x, on_boundary, derivative_order=0, component=0)

Bases: deepxde.icbcs.boundary_conditions.BC

Periodic boundary conditions on component_x.

collocation_points(X)

error(X, inputs, outputs, beg, end)Returns the loss.

class deepxde.icbcs.boundary_conditions.OperatorBC(geom, func, on_boundary)

Bases: deepxde.icbcs.boundary_conditions.BC

General operator boundary conditions: func(inputs, outputs, X) = 0.

Parameters

• geom – Geometry.

• func – A function that takes arguments (inputs, outputs, X) and outputs a tensor of size N x 1, where N is the length of inputs. inputs and outputs are the network input and output tensors, respectively; X is the NumPy array of the inputs.

• on_boundary – (x, Geometry.on_boundary(x)) -> True/False.

error(X, inputs, outputs, beg, end)Returns the loss.

class deepxde.icbcs.boundary_conditions.PointSetBC(points, values, component=0)

Bases: object

Dirichlet boundary condition for a set of points. Compare the output (associated with points) with values (target data).

Parameters

• points – An array of points where the corresponding target values are known and used for training.

• values – An array of values that gives the exact solution of the problem.

• component – The output component satisfying this BC.
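A hedged sketch with made-up observation data, as typically used to anchor an inverse problem to measurements:

import numpy as np
import deepxde as dde

# Hypothetical measurements: 10 locations with known solution values.
observe_x = np.linspace(0, 1, num=10).reshape(-1, 1)
observe_y = np.sin(np.pi * observe_x)

observe_bc = dde.PointSetBC(observe_x, observe_y, component=0)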

collocation_points(X)

error(X, inputs, outputs, beg, end)


3.4.2 deepxde.icbcs.initial_conditions module

Initial conditions.

class deepxde.icbcs.initial_conditions.IC(geom, func, on_initial, component=0)

Bases: object

Initial conditions: y([x, t0]) = func([x, t0]).
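A hedged sketch in the style of the time-dependent demos, with y(x, 0) = -sin(pi x) as an illustrative initial profile:

import numpy as np
import deepxde as dde

geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

# The last argument selects the points with t = t0.
ic = dde.IC(
    geomtime,
    lambda x: -np.sin(np.pi * x[:, 0:1]),
    lambda _, on_initial: on_initial,
)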

collocation_points(X)

error(X, inputs, outputs, beg, end)

filter(X)

3.5 deepxde.nn

3.5.1 deepxde.nn.activations module

deepxde.nn.activations.get(identifier)Returns function.

Parameters identifier – Function or string.

Returns Function corresponding to the input string or input function.

deepxde.nn.activations.layer_wise_locally_adaptive(activation, n=1)Layer-wise locally adaptive activation functions (L-LAAF).

Examples:

To define a L-LAAF ReLU with the scaling factor n = 10:

n = 10
activation = f"LAAF-{n} relu"  # "LAAF-10 relu"

References: Jagtap et al., 2019.

deepxde.nn.activations.linear(x)

3.5.2 deepxde.nn.initializers module

class deepxde.nn.initializers.VarianceScalingStacked(scale=1.0, mode='fan_in', distribution='truncated_normal', seed=None)

Bases: object

Initializer capable of adapting its scale to the shape of weights tensors.

With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) of stddev = sqrt(scale / n), where n is:

• number of input units in the weight tensor, if mode = “fan_in”

• number of output units, if mode = “fan_out”

• average of the numbers of input and output units, if mode = “fan_avg”


With distribution=”uniform”, samples are drawn from a uniform distribution within [-limit, limit], with limit =sqrt(3 * scale / n).

Parameters

• scale – Scaling factor (positive float).

• mode – One of “fan_in”, “fan_out”, “fan_avg”.

• distribution – Random distribution to use. One of “normal”, “uniform”.

• seed – A Python integer. Used to create random seeds. See tf.set_random_seed for behavior.

• dtype – Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.

Raises ValueError – In case of an invalid value for the "scale", "mode" or "distribution" arguments.

deepxde.nn.initializers.get(identifier)
Retrieve an initializer by the identifier.

Parameters identifier – String that contains the initializer name or an initializer function.

Returns Initializer instance based on the input identifier.

deepxde.nn.initializers.initializer_dict_tf()

deepxde.nn.initializers.initializer_dict_torch()

3.5.3 deepxde.nn.regularizers module

deepxde.nn.regularizers.get(identifier)

3.6 deepxde.nn.tensorflow_compat_v1

3.6.1 deepxde.nn.tensorflow_compat_v1.deeponet module

class deepxde.nn.tensorflow_compat_v1.deeponet.DeepONet(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, regularization=None, use_bias=True, stacked=False, trainable_branch=True, trainable_trunk=True)

Bases: deepxde.nn.tensorflow_compat_v1.nn.NN

Deep operator network.

Lu et al. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat Mach Intell, 2021.

Parameters

• layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be equal.


• layer_sizes_trunk (list) – A list of integers as the width of a fully connected network.

• activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation["trunk"], and the branch net uses activation["branch"].

• trainable_branch – Boolean.

• trainable_trunk – Boolean or a list of booleans.

build()Construct the network.

inputsReturn the net inputs (placeholders).

outputsReturn the net outputs (tf.Tensor).

targetsReturn the targets of the net outputs (placeholders).

class deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetCartesianProd(layer_size_branch, layer_size_trunk, activation, kernel_initializer, regularization=None)

Bases: deepxde.nn.tensorflow_compat_v1.nn.NN

Deep operator network for dataset in the format of Cartesian product.

Parameters

• layer_size_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be equal.

• layer_size_trunk (list) – A list of integers as the width of a fully connected network.

• activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation["trunk"], and the branch net uses activation["branch"].

build()Construct the network.

inputsReturn the net inputs (placeholders).

outputsReturn the net outputs (tf.Tensor).

targetsReturn the targets of the net outputs (placeholders).


class deepxde.nn.tensorflow_compat_v1.deeponet.FourierDeepONetCartesianProd(layer_size_Fourier_branch, output_shape, layer_size_branch, layer_size_trunk, activation, kernel_initializer, regularization=None)

Bases: deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetCartesianProd

Deep operator network with a Fourier trunk net for dataset in the format of Cartesian product.

There are two pairs of trunk and branch nets. One pair is the vanilla DeepONet, and the other one uses Fourier basis as the trunk net. Because the dataset is in the format of Cartesian product, the Fourier branch-trunk nets are implemented via the inverse FFT.

Parameters

• layer_size_Fourier_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function.

• output_shape (tuple[int]) – Shape of the output.

build()Construct the network.

3.6.2 deepxde.nn.tensorflow_compat_v1.fnn module

class deepxde.nn.tensorflow_compat_v1.fnn.FNN(layer_sizes, activation, kernel_initializer, regularization=None, dropout_rate=0, batch_normalization=None, layer_normalization=None, kernel_constraint=None, use_bias=True)

Bases: deepxde.nn.tensorflow_compat_v1.nn.NN

Fully-connected neural network.
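A hedged construction sketch (the layer widths, activation, and initializer are illustrative choices):

from deepxde.nn.tensorflow_compat_v1.fnn import FNN

# 2 inputs, three hidden layers of 50 neurons each, 1 output.
net = FNN([2] + [50] * 3 + [1], "tanh", "Glorot uniform")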

build()Construct the network.

inputsReturn the net inputs (placeholders).

outputsReturn the net outputs (tf.Tensor).

targetsReturn the targets of the net outputs (placeholders).


class deepxde.nn.tensorflow_compat_v1.fnn.PFNN(layer_sizes, activation, kernel_initializer, regularization=None, dropout_rate=0, batch_normalization=None)

Bases: deepxde.nn.tensorflow_compat_v1.fnn.FNN

Parallel fully-connected neural network that uses independent sub-networks for each network output.

Parameters layer_sizes – A nested list that defines the architecture of the neural network (how the layers are connected). If layer_sizes[i] is an int, it represents one layer shared by all the outputs; if layer_sizes[i] is a list, it represents len(layer_sizes[i]) sub-layers, each of which is used exclusively by one output. Note that len(layer_sizes[i]) should equal the number of outputs. Each number specifies the number of neurons in that layer.

build()Construct the network.

3.6.3 deepxde.nn.tensorflow_compat_v1.mfnn module

class deepxde.nn.tensorflow_compat_v1.mfnn.MfNN(layer_sizes_low_fidelity, layer_sizes_high_fidelity, activation, kernel_initializer, regularization=None, residue=False, trainable_low_fidelity=True, trainable_high_fidelity=True)

Bases: deepxde.nn.tensorflow_compat_v1.nn.NN

Multifidelity neural networks.

build()Construct the network.

inputsReturn the net inputs (placeholders).

outputsReturn the net outputs (tf.Tensor).

targetsReturn the targets of the net outputs (placeholders).

3.6.4 deepxde.nn.tensorflow_compat_v1.msffn module

class deepxde.nn.tensorflow_compat_v1.msffn.MsFFN(layer_sizes, activation, kernel_initializer, sigmas, regularization=None, dropout_rate=0, batch_normalization=None, layer_normalization=None, kernel_constraint=None, use_bias=True)

Bases: deepxde.nn.tensorflow_compat_v1.fnn.FNN

Multi-scale fourier feature networks.

References:

• https://arxiv.org/abs/2012.10047

• https://github.com/PredictiveIntelligenceLab/MultiscalePINNs


Parameters sigmas – List of standard deviations of the distribution of Fourier feature embeddings.

build()Construct the network.

class deepxde.nn.tensorflow_compat_v1.msffn.STMsFFN(layer_sizes, activation, kernel_initializer, sigmas_x, sigmas_t, regularization=None, dropout_rate=0, batch_normalization=None, layer_normalization=None, kernel_constraint=None, use_bias=True)

Bases: deepxde.nn.tensorflow_compat_v1.msffn.MsFFN

Spatio-temporal multi-scale fourier feature networks.

References:

• https://arxiv.org/abs/2012.10047

• https://github.com/PredictiveIntelligenceLab/MultiscalePINNs

build()Construct the network.

3.6.5 deepxde.nn.tensorflow_compat_v1.nn module

class deepxde.nn.tensorflow_compat_v1.nn.NN

Bases: object

Base class for all neural network modules.

apply_feature_transform(transform)
Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).

apply_output_transform(transform)Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).

auxiliary_varsReturn additional variables needed (placeholders).

build()Construct the network.

built

feed_dict(training, inputs, targets=None, auxiliary_vars=None)Construct a feed_dict to feed values to TensorFlow placeholders.

inputsReturn the net inputs (placeholders).

outputsReturn the net outputs (tf.Tensor).

targetsReturn the targets of the net outputs (placeholders).


3.6.6 deepxde.nn.tensorflow_compat_v1.resnet module

class deepxde.nn.tensorflow_compat_v1.resnet.ResNet(input_size, output_size, num_neurons, num_blocks, activation, kernel_initializer, regularization=None)

Bases: deepxde.nn.tensorflow_compat_v1.nn.NN

Residual neural network.

build()Construct the network.

inputsReturn the net inputs (placeholders).

outputsReturn the net outputs (tf.Tensor).

targetsReturn the targets of the net outputs (placeholders).

3.7 deepxde.nn.tensorflow

3.7.1 deepxde.nn.tensorflow.deeponet module

class deepxde.nn.tensorflow.deeponet.DeepONetCartesianProd(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer)

Bases: deepxde.nn.tensorflow.nn.NN

Deep operator network for dataset in the format of Cartesian product.

Parameters

• layer_size_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be equal.

• layer_size_trunk (list) – A list of integers as the width of a fully connected network.

• activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation["trunk"], and the branch net uses activation["branch"].

call(inputs, training=False)
Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__ method, i.e. model(inputs), which relies on the underlying call method.

Parameters

• inputs – Input tensor, or dict/list/tuple of input tensors.


• training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

• mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns A tensor if there is a single output, or a list of tensors if there is more than one output.

3.7.2 deepxde.nn.tensorflow.fnn module

class deepxde.nn.tensorflow.fnn.FNN(layer_sizes, activation, kernel_initializer, regularization=None, dropout_rate=0)

Bases: deepxde.nn.tensorflow.nn.NN

Fully-connected neural network.

call(inputs, training=False)
Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__ method, i.e. model(inputs), which relies on the underlying call method.

Parameters

• inputs – Input tensor, or dict/list/tuple of input tensors.

• training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

• mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns A tensor if there is a single output, or a list of tensors if there is more than one output.

3.7.3 deepxde.nn.tensorflow.nn module

class deepxde.nn.tensorflow.nn.NN

Bases: keras.engine.training.Model

Base class for all neural network modules.

apply_feature_transform(transform)
Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).

apply_output_transform(transform)Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).

auxiliary_varsAny additional variables needed.

Type Tensors

inputsReturn the net inputs (Tensors).


3.8 deepxde.nn.pytorch

3.8.1 deepxde.nn.pytorch.fnn module

class deepxde.nn.pytorch.fnn.FNN(layer_sizes, activation, kernel_initializer)

Bases: deepxde.nn.pytorch.nn.NN

Fully-connected neural network.

forward(inputs)Defines the computation performed at every call.

Should be overridden by all subclasses.

Note: Although the recipe for forward pass needs to be defined within this function, one should call theModule instance afterwards instead of this since the former takes care of running the registered hookswhile the latter silently ignores them.

3.8.2 deepxde.nn.pytorch.nn module

class deepxde.nn.pytorch.nn.NN

Bases: torch.nn.modules.module.Module

Base class for all neural network modules.

apply_feature_transform(transform)
Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).

apply_output_transform(transform)Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).

3.9 deepxde.optimizers

3.9.1 deepxde.optimizers.config module

deepxde.optimizers.config.set_LBFGS_options(maxcor=100, ftol=0, gtol=1e-08, maxiter=15000, maxfun=None, maxls=50)

Sets the hyperparameters of L-BFGS.

The L-BFGS optimizer used in each backend:

• TensorFlow 1.x: scipy.optimize.minimize

• TensorFlow 2.x: tfp.optimizer.lbfgs_minimize

• PyTorch: torch.optim.LBFGS

I find empirically that torch.optim.LBFGS and scipy.optimize.minimize are better than tfp.optimizer.lbfgs_minimize in terms of the final loss value.

Parameters


• maxcor (int) – maxcor (scipy), num_correction_pairs (tfp), history_size (torch). The maximum number of variable metric corrections used to define the limited memory matrix. (The limited memory BFGS method does not store the full Hessian but uses this many terms in an approximation to it.)

• ftol (float) – ftol (scipy), f_relative_tolerance (tfp), tolerance_change (torch). The iteration stops when (f^k - f^{k+1}) / max{|f^k|, |f^{k+1}|, 1} <= ftol.

• gtol (float) – gtol (scipy), tolerance (tfp), tolerance_grad (torch). The iteration will stop when max{|proj g_i| : i = 1, ..., n} <= gtol, where proj g_i is the i-th component of the projected gradient.

• maxiter (int) – maxiter (scipy), max_iterations (tfp), max_iter (torch). Maximum number of iterations.

• maxfun (int) – maxfun (scipy), max_eval (torch). Maximum number of function evaluations. If None, maxiter * 1.25.

• maxls (int) – maxls (scipy), max_line_search_iterations (tfp). Maximum number of line search steps (per iteration).
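A hedged usage sketch, assuming a dde.Model named model and that L-BFGS is selected via Model.compile("L-BFGS"); the numbers are illustrative:

from deepxde.optimizers.config import set_LBFGS_options

set_LBFGS_options(maxcor=50, maxiter=3000)

model.compile("adam", lr=0.001)
model.train(epochs=10000)
model.compile("L-BFGS")                   # switch to L-BFGS for fine-tuning
losshistory, train_state = model.train()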

Warning: If L-BFGS stops earlier than expected, set the default float type to ‘float64’:

dde.config.set_default_float("float64")

3.10 deepxde.utils

3.10.1 deepxde.utils.external module

External utilities.

class deepxde.utils.external.PointSet(points)

Bases: object

A set of points.

Parameters points – A NumPy array of shape (N, dx). A list of dx-dim points.

inside(x)Returns True if x is in this set of points, otherwise, returns False.

Parameters x – A NumPy array. A single point, or a list of points.

Returns

If x is a single point, returns True or False. If x is a list of points, returns a list of True or False.

values_to_func(values, default_value=0)Convert the pairs of points and values to a callable function.

Parameters

• values – A NumPy array of shape (N, dy). values[i] is the dy-dim function value of the i-th point in this point set.

• default_value (float) – The function value of the points not in this point set.

Returns


A callable function. The input of this function should be a NumPy array of shape (?, dx).
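A hedged sketch of the round trip from points and values to a callable (the expected outputs follow from the description above):

import numpy as np
from deepxde.utils.external import PointSet

points = np.array([[0.0], [0.5], [1.0]])
values = np.array([[1.0], [2.0], [3.0]])

ps = PointSet(points)
f = ps.values_to_func(values, default_value=0)
f(np.array([[0.5], [0.25]]))   # expected: [[2.0], [0.0]]; 0.25 is not in the point set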

deepxde.utils.external.apply(func, args=None, kwds=None)Launch a new process to call the function.

This can be used to clear Tensorflow GPU memory after model execution: https://stackoverflow.com/questions/39758094/clearing-tensorflow-gpu-memory-after-model-execution

deepxde.utils.external.standardize(X_train, X_test)Standardize features by removing the mean and scaling to unit variance.

The mean and std are computed from the training data X_train using sklearn.preprocessing.StandardScaler, and then applied to the testing data X_test.

Parameters

• X_train – A NumPy array of shape (n_samples, n_features). The data used to compute the mean and standard deviation used for later scaling along the features axis.

• X_test – A NumPy array.

Returns Instance of sklearn.preprocessing.StandardScaler. X_train: Transformed training data. X_test: Transformed testing data.

Return type scaler

deepxde.utils.external.uniformly_continuous_delta(X, Y, eps)
Compute the supremum of delta in the definition of uniform continuity.

Parameters X – N x d, equispaced points.


CHAPTER 4

Indices and tables

• genindex

• modindex

• search



Index

Aabsolute_percentage_error_std() (in mod-

ule deepxde.metrics), 24accuracy() (in module deepxde.metrics), 24add_anchors() (deepxde.data.pde.PDE method), 34append() (deepxde.callbacks.CallbackList method), 20append() (deepxde.model.LossHistory method), 26apply() (in module deepxde.utils.external), 55apply_feature_transform() (deep-

xde.nn.pytorch.nn.NN method), 53apply_feature_transform() (deep-

xde.nn.tensorflow.nn.NN method), 52apply_feature_transform() (deep-

xde.nn.tensorflow_compat_v1.nn.NN method),50

apply_output_transform() (deep-xde.nn.pytorch.nn.NN method), 53

apply_output_transform() (deep-xde.nn.tensorflow.nn.NN method), 52

apply_output_transform() (deep-xde.nn.tensorflow_compat_v1.nn.NN method),50

auxiliary_vars (deepxde.nn.tensorflow.nn.NN at-tribute), 52

auxiliary_vars (deep-xde.nn.tensorflow_compat_v1.nn.NN attribute),50

Bbackground_points() (deep-

xde.geometry.geometry.Geometry method),37

background_points() (deep-xde.geometry.geometry_1d.Interval method),38

background_points() (deep-xde.geometry.geometry_2d.Disk method),38

background_points() (deep-

xde.geometry.geometry_nd.Hyperspheremethod), 42

BatchSampler (class in deepxde.data.sampler), 34BC (class in deepxde.icbcs.boundary_conditions), 43bc_points() (deepxde.data.pde.PDE method), 34boundary_normal() (deep-

xde.geometry.csg.CSGDifference method),36

boundary_normal() (deep-xde.geometry.csg.CSGIntersection method),36

boundary_normal() (deep-xde.geometry.csg.CSGUnion method), 37

boundary_normal() (deep-xde.geometry.geometry.Geometry method),37

boundary_normal() (deep-xde.geometry.geometry_1d.Interval method),38

boundary_normal() (deep-xde.geometry.geometry_2d.Disk method),38

boundary_normal() (deep-xde.geometry.geometry_2d.Polygon method),39

boundary_normal() (deep-xde.geometry.geometry_2d.Triangle method),40

boundary_normal() (deep-xde.geometry.geometry_nd.Hypercubemethod), 41

boundary_normal() (deep-xde.geometry.geometry_nd.Hyperspheremethod), 42

boundary_normal() (deep-xde.geometry.timedomain.GeometryXTimemethod), 42

build() (deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetmethod), 47

build() (deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetCartesianProd

61

Page 66: DeepXDE Documentation

DeepXDE Documentation, Release 0.13.6

method), 47build() (deepxde.nn.tensorflow_compat_v1.deeponet.FourierDeepONetCartesianProd

method), 48build() (deepxde.nn.tensorflow_compat_v1.fnn.FNN

method), 48build() (deepxde.nn.tensorflow_compat_v1.fnn.PFNN

method), 49build() (deepxde.nn.tensorflow_compat_v1.mfnn.MfNN

method), 49build() (deepxde.nn.tensorflow_compat_v1.msffn.MsFFN

method), 50build() (deepxde.nn.tensorflow_compat_v1.msffn.STMsFFN

method), 50build() (deepxde.nn.tensorflow_compat_v1.nn.NN

method), 50build() (deepxde.nn.tensorflow_compat_v1.resnet.ResNet

method), 51built (deepxde.nn.tensorflow_compat_v1.nn.NN

attribute), 50

Ccall() (deepxde.nn.tensorflow.deeponet.DeepONetCartesianProd

method), 51call() (deepxde.nn.tensorflow.fnn.FNN method), 52Callback (class in deepxde.callbacks), 19CallbackList (class in deepxde.callbacks), 20clear() (in module deepxde.gradients), 23clockwise_rotation_90() (in module deep-

xde.geometry.geometry_2d), 40collocation_points() (deep-

xde.icbcs.boundary_conditions.BC method),43

collocation_points() (deep-xde.icbcs.boundary_conditions.PeriodicBCmethod), 44

collocation_points() (deep-xde.icbcs.boundary_conditions.PointSetBCmethod), 44

collocation_points() (deep-xde.icbcs.initial_conditions.IC method),45

compile() (deepxde.model.Model method), 24Constraint (class in deepxde.data.constraint), 27CSGDifference (class in deepxde.geometry.csg), 36CSGIntersection (class in deepxde.geometry.csg),

36CSGUnion (class in deepxde.geometry.csg), 36Cuboid (class in deepxde.geometry.geometry_3d), 41

DData (class in deepxde.data.data), 27DataSet (class in deepxde.data.dataset), 28DeepONet (class in deep-

xde.nn.tensorflow_compat_v1.deeponet),

46DeepONetCartesianProd (class in deep-

xde.nn.tensorflow.deeponet), 51DeepONetCartesianProd (class in deep-

xde.nn.tensorflow_compat_v1.deeponet),47

deepxde.callbacks (module), 19deepxde.config (module), 23deepxde.data.constraint (module), 27deepxde.data.data (module), 27deepxde.data.dataset (module), 28deepxde.data.fpde (module), 28deepxde.data.func_constraint (module), 30deepxde.data.function (module), 31deepxde.data.helper (module), 31deepxde.data.ide (module), 31deepxde.data.mf (module), 32deepxde.data.pde (module), 33deepxde.data.sampler (module), 34deepxde.data.triple (module), 35deepxde.geometry.csg (module), 36deepxde.geometry.geometry (module), 37deepxde.geometry.geometry_1d (module), 38deepxde.geometry.geometry_2d (module), 38deepxde.geometry.geometry_3d (module), 41deepxde.geometry.geometry_nd (module), 41deepxde.geometry.sampler (module), 42deepxde.geometry.timedomain (module), 42deepxde.gradients (module), 23deepxde.icbcs.boundary_conditions (mod-

ule), 43deepxde.icbcs.initial_conditions (mod-

ule), 45deepxde.losses (module), 24deepxde.metrics (module), 24deepxde.model (module), 24deepxde.nn.activations (module), 45deepxde.nn.initializers (module), 45deepxde.nn.pytorch.fnn (module), 53deepxde.nn.pytorch.nn (module), 53deepxde.nn.regularizers (module), 46deepxde.nn.tensorflow.deeponet (module),

51deepxde.nn.tensorflow.fnn (module), 52deepxde.nn.tensorflow.nn (module), 52deepxde.nn.tensorflow_compat_v1.deeponet

(module), 46deepxde.nn.tensorflow_compat_v1.fnn

(module), 48deepxde.nn.tensorflow_compat_v1.mfnn

(module), 49deepxde.nn.tensorflow_compat_v1.msffn

(module), 49

62 Index

Page 67: DeepXDE Documentation

DeepXDE Documentation, Release 0.13.6

deepxde.nn.tensorflow_compat_v1.nn (mod-ule), 50

deepxde.nn.tensorflow_compat_v1.resnet(module), 51

deepxde.optimizers.config (module), 53deepxde.postprocessing (module), 26deepxde.real (module), 27deepxde.utils.external (module), 54default_float() (in module deepxde.config), 23difference() (deep-

xde.geometry.geometry.Geometry method),37

DirichletBC (class in deep-xde.icbcs.boundary_conditions), 43

Disk (class in deepxde.geometry.geometry_2d), 38disregard_best() (deepxde.model.TrainState

method), 26distance2boundary() (deep-

xde.geometry.geometry.Geometry method),37

distance2boundary() (deep-xde.geometry.geometry_1d.Interval method),38

distance2boundary() (deep-xde.geometry.geometry_2d.Disk method),38

distance2boundary() (deep-xde.geometry.geometry_nd.Hyperspheremethod), 42

distance2boundary_unitdirn() (deep-xde.geometry.geometry_2d.Disk method),38

distance2boundary_unitdirn() (deep-xde.geometry.geometry_nd.Hyperspheremethod), 42

DropoutUncertainty (class in deepxde.callbacks),20

dynamic_dist2npts() (deep-xde.data.fpde.Fractional method), 29

EEarlyStopping (class in deepxde.callbacks), 20epochs_completed (deep-

xde.data.sampler.BatchSampler attribute),35

error() (deepxde.icbcs.boundary_conditions.BCmethod), 43

error() (deepxde.icbcs.boundary_conditions.DirichletBCmethod), 43

error() (deepxde.icbcs.boundary_conditions.NeumannBCmethod), 44

error() (deepxde.icbcs.boundary_conditions.OperatorBCmethod), 44

error() (deepxde.icbcs.boundary_conditions.PeriodicBCmethod), 44

error() (deepxde.icbcs.boundary_conditions.PointSetBCmethod), 44

error() (deepxde.icbcs.boundary_conditions.RobinBCmethod), 44

error() (deepxde.icbcs.initial_conditions.IC method),45

Ffeed_dict() (deep-

xde.nn.tensorflow_compat_v1.nn.NN method),50

filter() (deepxde.icbcs.boundary_conditions.BCmethod), 43

filter() (deepxde.icbcs.initial_conditions.ICmethod), 45

FirstDerivative (class in deepxde.callbacks), 21FNN (class in deepxde.nn.pytorch.fnn), 53FNN (class in deepxde.nn.tensorflow.fnn), 52FNN (class in deepxde.nn.tensorflow_compat_v1.fnn), 48forward() (deepxde.nn.pytorch.fnn.FNN method), 53FourierDeepONetCartesianProd (class in deep-

xde.nn.tensorflow_compat_v1.deeponet), 47FPDE (class in deepxde.data.fpde), 28Fractional (class in deepxde.data.fpde), 29FractionalTime (class in deepxde.data.fpde), 29FuncConstraint (class in deep-

xde.data.func_constraint), 30Function (class in deepxde.data.function), 31

GGeometry (class in deepxde.geometry.geometry), 37GeometryXTime (class in deep-

xde.geometry.timedomain), 42get() (in module deepxde.losses), 24get() (in module deepxde.metrics), 24get() (in module deepxde.nn.activations), 45get() (in module deepxde.nn.initializers), 46get() (in module deepxde.nn.regularizers), 46get_int_matrix() (deepxde.data.fpde.FPDE

method), 29get_int_matrix() (deepxde.data.fpde.TimeFPDE

method), 30get_int_matrix() (deepxde.data.ide.IDE method),

32get_matrix() (deepxde.data.fpde.Fractional

method), 29get_matrix() (deepxde.data.fpde.FractionalTime

method), 29get_matrix_dynamic() (deep-

xde.data.fpde.Fractional method), 29get_matrix_dynamic() (deep-

xde.data.fpde.FractionalTime method), 30

Index 63

Page 68: DeepXDE Documentation

DeepXDE Documentation, Release 0.13.6

get_matrix_static() (deep-xde.data.fpde.Fractional method), 29

get_matrix_static() (deep-xde.data.fpde.FractionalTime method), 30

get_monitor_value() (deep-xde.callbacks.EarlyStopping method), 21

get_next() (deepxde.data.sampler.BatchSamplermethod), 35

get_value() (deepxde.callbacks.OperatorPredictormethod), 22

get_value() (deepxde.callbacks.VariableValuemethod), 22

get_weight() (deepxde.data.fpde.Fractionalmethod), 29

get_x() (deepxde.data.fpde.Fractional method), 29get_x() (deepxde.data.fpde.FractionalTime method),

30get_x_dynamic() (deepxde.data.fpde.Fractional

method), 29get_x_dynamic() (deep-

xde.data.fpde.FractionalTime method), 30get_x_static() (deepxde.data.fpde.Fractional

method), 29get_x_static() (deepxde.data.fpde.FractionalTime

method), 30

Hhessian() (in module deepxde.gradients), 23Hypercube (class in deepxde.geometry.geometry_nd),

41Hypersphere (class in deep-

xde.geometry.geometry_nd), 42

IIC (class in deepxde.icbcs.initial_conditions), 45IDE (class in deepxde.data.ide), 31init() (deepxde.callbacks.Callback method), 19init() (deepxde.callbacks.MovieDumper method), 21init() (deepxde.callbacks.OperatorPredictor method),

22initializer_dict_tf() (in module deep-

xde.nn.initializers), 46initializer_dict_torch() (in module deep-

xde.nn.initializers), 46inputs (deepxde.nn.tensorflow.nn.NN attribute), 52inputs (deepxde.nn.tensorflow_compat_v1.deeponet.DeepONet

attribute), 47inputs (deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetCartesianProd

attribute), 47inputs (deepxde.nn.tensorflow_compat_v1.fnn.FNN at-

tribute), 48inputs (deepxde.nn.tensorflow_compat_v1.mfnn.MfNN

attribute), 49

inputs (deepxde.nn.tensorflow_compat_v1.nn.NN at-tribute), 50

inputs (deepxde.nn.tensorflow_compat_v1.resnet.ResNetattribute), 51

inside() (deepxde.geometry.csg.CSGDifferencemethod), 36

inside() (deepxde.geometry.csg.CSGIntersectionmethod), 36

inside() (deepxde.geometry.csg.CSGUnion method),37

inside() (deepxde.geometry.geometry.Geometrymethod), 37

inside() (deepxde.geometry.geometry_1d.Intervalmethod), 38

inside() (deepxde.geometry.geometry_2d.Diskmethod), 39

inside() (deepxde.geometry.geometry_2d.Polygonmethod), 39

inside() (deepxde.geometry.geometry_2d.Trianglemethod), 40

inside() (deepxde.geometry.geometry_nd.Hypercubemethod), 41

inside() (deepxde.geometry.geometry_nd.Hyperspheremethod), 42

inside() (deepxde.utils.external.PointSet method), 54intersection() (deep-

xde.geometry.geometry.Geometry method),37

Interval (class in deepxde.geometry.geometry_1d), 38is_left() (in module deep-

xde.geometry.geometry_2d), 40is_on_line_segment() (in module deep-

xde.geometry.geometry_2d), 40is_rectangle() (in module deep-

xde.geometry.geometry_2d), 40is_valid() (deepxde.geometry.geometry_2d.Rectangle

static method), 39

Jjacobian() (in module deepxde.gradients), 23

Ll2_relative_error() (in module deep-

xde.metrics), 24layer_wise_locally_adaptive() (in module

deepxde.nn.activations), 45linear() (in module deepxde.nn.activations), 45log_uniform_points() (deep-

xde.geometry.geometry_1d.Interval method),38

losses() (deepxde.data.constraint.Constraintmethod), 27

losses() (deepxde.data.data.Data method), 27losses() (deepxde.data.data.Tuple method), 28

64 Index

Page 69: DeepXDE Documentation

DeepXDE Documentation, Release 0.13.6

losses() (deepxde.data.dataset.DataSet method), 28losses() (deepxde.data.fpde.FPDE method), 29losses() (deepxde.data.func_constraint.FuncConstraint

method), 30losses() (deepxde.data.function.Function method), 31losses() (deepxde.data.ide.IDE method), 32losses() (deepxde.data.mf.MfDataSet method), 32losses() (deepxde.data.mf.MfFunc method), 32losses() (deepxde.data.pde.PDE method), 34losses() (deepxde.data.triple.Triple method), 35losses() (deepxde.data.triple.TripleCartesianProd

method), 36LossHistory (class in deepxde.model), 26

Mmax_absolute_percentage_error() (in mod-

ule deepxde.metrics), 24mean_absolute_error() (in module deep-

xde.losses), 24mean_absolute_percentage_error() (in mod-

ule deepxde.losses), 24mean_absolute_percentage_error() (in mod-

ule deepxde.metrics), 24mean_l2_relative_error() (in module deep-

xde.metrics), 24mean_squared_error() (in module deep-

xde.losses), 24mean_squared_error() (in module deep-

xde.metrics), 24MfDataSet (class in deepxde.data.mf ), 32MfFunc (class in deepxde.data.mf ), 32MfNN (class in deepxde.nn.tensorflow_compat_v1.mfnn),

49mindist2boundary() (deep-

xde.geometry.geometry.Geometry method),37

mindist2boundary() (deep-xde.geometry.geometry_1d.Interval method),38

mindist2boundary() (deep-xde.geometry.geometry_2d.Disk method),39

mindist2boundary() (deep-xde.geometry.geometry_nd.Hyperspheremethod), 42

Model (class in deepxde.model), 24model (deepxde.callbacks.Callback attribute), 19ModelCheckpoint (class in deepxde.callbacks), 21modify_first_order() (deep-

xde.data.fpde.Fractional method), 29modify_second_order() (deep-

xde.data.fpde.Fractional method), 29modify_third_order() (deep-

xde.data.fpde.Fractional method), 29

MovieDumper (class in deepxde.callbacks), 21MsFFN (class in deep-

xde.nn.tensorflow_compat_v1.msffn), 49

Nnanl2_relative_error() (in module deep-

xde.metrics), 24NeumannBC (class in deep-

xde.icbcs.boundary_conditions), 43NN (class in deepxde.nn.pytorch.nn), 53NN (class in deepxde.nn.tensorflow.nn), 52NN (class in deepxde.nn.tensorflow_compat_v1.nn), 50normal_derivative() (deep-

xde.icbcs.boundary_conditions.BC method),43

num_bcs (deepxde.data.pde.PDE attribute), 34nx (deepxde.data.fpde.FractionalTime attribute), 29

O
on_batch_begin() (deepxde.callbacks.Callback method), 19
on_batch_begin() (deepxde.callbacks.CallbackList method), 20
on_batch_end() (deepxde.callbacks.Callback method), 19
on_batch_end() (deepxde.callbacks.CallbackList method), 20
on_boundary() (deepxde.geometry.csg.CSGDifference method), 36
on_boundary() (deepxde.geometry.csg.CSGIntersection method), 36
on_boundary() (deepxde.geometry.csg.CSGUnion method), 37
on_boundary() (deepxde.geometry.geometry.Geometry method), 37
on_boundary() (deepxde.geometry.geometry_1d.Interval method), 38
on_boundary() (deepxde.geometry.geometry_2d.Disk method), 39
on_boundary() (deepxde.geometry.geometry_2d.Polygon method), 39
on_boundary() (deepxde.geometry.geometry_2d.Triangle method), 40
on_boundary() (deepxde.geometry.geometry_nd.Hypercube method), 41


on_boundary() (deepxde.geometry.geometry_nd.Hypersphere method), 42
on_boundary() (deepxde.geometry.timedomain.GeometryXTime method), 42
on_epoch_begin() (deepxde.callbacks.Callback method), 19
on_epoch_begin() (deepxde.callbacks.CallbackList method), 20
on_epoch_end() (deepxde.callbacks.Callback method), 19
on_epoch_end() (deepxde.callbacks.CallbackList method), 20
on_epoch_end() (deepxde.callbacks.DropoutUncertainty method), 20
on_epoch_end() (deepxde.callbacks.EarlyStopping method), 21
on_epoch_end() (deepxde.callbacks.ModelCheckpoint method), 21
on_epoch_end() (deepxde.callbacks.MovieDumper method), 21
on_epoch_end() (deepxde.callbacks.PDEResidualResampler method), 22
on_epoch_end() (deepxde.callbacks.Timer method), 22
on_epoch_end() (deepxde.callbacks.VariableValue method), 22
on_initial() (deepxde.geometry.timedomain.GeometryXTime method), 42
on_initial() (deepxde.geometry.timedomain.TimeDomain method), 43
on_predict_begin() (deepxde.callbacks.Callback method), 19
on_predict_begin() (deepxde.callbacks.CallbackList method), 20
on_predict_end() (deepxde.callbacks.Callback method), 19
on_predict_end() (deepxde.callbacks.CallbackList method), 20
on_predict_end() (deepxde.callbacks.OperatorPredictor method), 22
on_train_begin() (deepxde.callbacks.Callback method), 19
on_train_begin() (deepxde.callbacks.CallbackList method), 20
on_train_begin() (deepxde.callbacks.EarlyStopping method), 21
on_train_begin() (deepxde.callbacks.MovieDumper method), 21
on_train_begin() (deepxde.callbacks.PDEResidualResampler method), 22
on_train_begin() (deepxde.callbacks.Timer method), 22
on_train_begin() (deepxde.callbacks.VariableValue method), 23
on_train_end() (deepxde.callbacks.Callback method), 20
on_train_end() (deepxde.callbacks.CallbackList method), 20
on_train_end() (deepxde.callbacks.DropoutUncertainty method), 20
on_train_end() (deepxde.callbacks.EarlyStopping method), 21
on_train_end() (deepxde.callbacks.MovieDumper method), 22
one_function() (in module deepxde.data.helper), 31
OperatorBC (class in deepxde.icbcs.boundary_conditions), 44
OperatorPredictor (class in deepxde.callbacks), 22
outputs (deepxde.nn.tensorflow_compat_v1.deeponet.DeepONet attribute), 47
outputs (deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetCartesianProd attribute), 47
outputs (deepxde.nn.tensorflow_compat_v1.fnn.FNN attribute), 48
outputs (deepxde.nn.tensorflow_compat_v1.mfnn.MfNN attribute), 49
outputs (deepxde.nn.tensorflow_compat_v1.nn.NN attribute), 50
outputs (deepxde.nn.tensorflow_compat_v1.resnet.ResNet attribute), 51
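The on_*() entries above are the hook methods that deepxde.callbacks objects implement; in ordinary use you do not call them yourself but pass callback instances to Model.train(), which invokes the hooks at the corresponding times. A self-contained sketch with a deliberately tiny placeholder problem; the checkpoint prefix, patience, period, and epoch counts are arbitrary choices:

    import deepxde as dde

    # Tiny placeholder problem so the sketch runs on its own
    # (see the Model sketch after the M entries for a full setup).
    geom = dde.geometry.Interval(0, 1)
    data = dde.data.PDE(geom, lambda x, y: dde.grad.hessian(y, x) - 2, [], num_domain=16)
    net = dde.maps.FNN([1, 16, 1], "tanh", "Glorot normal")
    model = dde.Model(data, net)
    model.compile("adam", lr=1e-3)

    # DeepXDE calls on_train_begin()/on_epoch_end()/on_train_end() on these objects.
    checkpoint = dde.callbacks.ModelCheckpoint("model.ckpt", save_better_only=True, period=500)
    earlystop = dde.callbacks.EarlyStopping(min_delta=1e-6, patience=2000)
    model.train(epochs=2000, callbacks=[checkpoint, earlystop])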

P
packed_data() (deepxde.model.TrainState method), 26
PDE (class in deepxde.data.pde), 33
PDEResidualResampler (class in deepxde.callbacks), 22
periodic_point() (deepxde.geometry.csg.CSGDifference method), 36
periodic_point() (deepxde.geometry.csg.CSGIntersection method), 36
periodic_point() (deepxde.geometry.csg.CSGUnion method), 37
periodic_point() (deepxde.geometry.geometry.Geometry method), 37


periodic_point() (deepxde.geometry.geometry_1d.Interval method), 38
periodic_point() (deepxde.geometry.geometry_nd.Hypercube method), 41
periodic_point() (deepxde.geometry.timedomain.GeometryXTime method), 43
PeriodicBC (class in deepxde.icbcs.boundary_conditions), 44
PFNN (class in deepxde.nn.tensorflow_compat_v1.fnn), 48
plot_best_state() (in module deepxde.postprocessing), 26
plot_loss_history() (in module deepxde.postprocessing), 26
PointSet (class in deepxde.utils.external), 54
PointSetBC (class in deepxde.icbcs.boundary_conditions), 44
Polygon (class in deepxde.geometry.geometry_2d), 39
polygon_signed_area() (in module deepxde.geometry.geometry_2d), 41
predict() (deepxde.model.Model method), 25
print_model() (deepxde.model.Model method), 25
pseudo() (in module deepxde.geometry.sampler), 42

Q
quad_points() (deepxde.data.ide.IDE method), 32
quasirandom() (in module deepxde.geometry.sampler), 42

R
random_boundary_points() (deepxde.geometry.csg.CSGDifference method), 36
random_boundary_points() (deepxde.geometry.csg.CSGIntersection method), 36
random_boundary_points() (deepxde.geometry.csg.CSGUnion method), 37
random_boundary_points() (deepxde.geometry.geometry.Geometry method), 37
random_boundary_points() (deepxde.geometry.geometry_1d.Interval method), 38
random_boundary_points() (deepxde.geometry.geometry_2d.Disk method), 39
random_boundary_points() (deepxde.geometry.geometry_2d.Polygon method), 39
random_boundary_points() (deepxde.geometry.geometry_2d.Rectangle method), 39
random_boundary_points() (deepxde.geometry.geometry_2d.Triangle method), 40
random_boundary_points() (deepxde.geometry.geometry_3d.Cuboid method), 41
random_boundary_points() (deepxde.geometry.geometry_nd.Hypercube method), 41
random_boundary_points() (deepxde.geometry.geometry_nd.Hypersphere method), 42
random_boundary_points() (deepxde.geometry.timedomain.GeometryXTime method), 43
random_initial_points() (deepxde.geometry.timedomain.GeometryXTime method), 43
random_points() (deepxde.geometry.csg.CSGDifference method), 36
random_points() (deepxde.geometry.csg.CSGIntersection method), 36
random_points() (deepxde.geometry.csg.CSGUnion method), 37
random_points() (deepxde.geometry.geometry.Geometry method), 37
random_points() (deepxde.geometry.geometry_1d.Interval method), 38
random_points() (deepxde.geometry.geometry_2d.Disk method), 39
random_points() (deepxde.geometry.geometry_2d.Polygon method), 39
random_points() (deepxde.geometry.geometry_2d.Triangle method), 40
random_points() (deepxde.geometry.geometry_nd.Hypercube method), 41
random_points() (deepxde.geometry.geometry_nd.Hypersphere method), 42
random_points() (deepxde.geometry.timedomain.GeometryXTime method), 43
Real (class in deepxde.real), 27


Rectangle (class in deepxde.geometry.geometry_2d), 39
resample_train_points() (deepxde.data.pde.PDE method), 34
ResNet (class in deepxde.nn.tensorflow_compat_v1.resnet), 51
restore() (deepxde.model.Model method), 25
RobinBC (class in deepxde.icbcs.boundary_conditions), 44
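random_points() and random_boundary_points() are the sampling methods that every Geometry subclass (Interval, Rectangle, Disk, Hypercube, ...) provides; they return NumPy arrays of shape (n, dim). A short sketch on a Rectangle; the bounds and point counts are arbitrary:

    import deepxde as dde

    geom = dde.geometry.Rectangle(xmin=[0, 0], xmax=[1, 2])
    x_in = geom.random_points(100, random="pseudo")          # 100 interior points, shape (100, 2)
    x_bd = geom.random_boundary_points(50, random="pseudo")  # 50 boundary points, shape (50, 2)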

S
sample() (in module deepxde.geometry.sampler), 42
save() (deepxde.model.Model method), 25
save_best_state() (in module deepxde.postprocessing), 27
save_loss_history() (in module deepxde.postprocessing), 27
saveplot() (in module deepxde.postprocessing), 27
Scheme (class in deepxde.data.fpde), 30
set_data_test() (deepxde.model.TrainState method), 26
set_data_train() (deepxde.model.TrainState method), 26
set_default_float() (in module deepxde.config), 23
set_float32() (deepxde.real.Real method), 27
set_float64() (deepxde.real.Real method), 27
set_LBFGS_options() (in module deepxde.optimizers.config), 53
set_loss_weights() (deepxde.model.LossHistory method), 26
set_model() (deepxde.callbacks.Callback method), 20
set_model() (deepxde.callbacks.CallbackList method), 20
softmax_cross_entropy() (in module deepxde.losses), 24
Sphere (class in deepxde.geometry.geometry_3d), 41
standardize() (in module deepxde.utils.external), 55
state_dict() (deepxde.model.Model method), 25
STMsFFN (class in deepxde.nn.tensorflow_compat_v1.msffn), 50

T
targets (deepxde.nn.tensorflow_compat_v1.deeponet.DeepONet attribute), 47
targets (deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetCartesianProd attribute), 47
targets (deepxde.nn.tensorflow_compat_v1.fnn.FNN attribute), 48
targets (deepxde.nn.tensorflow_compat_v1.mfnn.MfNN attribute), 49
targets (deepxde.nn.tensorflow_compat_v1.nn.NN attribute), 50
targets (deepxde.nn.tensorflow_compat_v1.resnet.ResNet attribute), 51
test() (deepxde.data.constraint.Constraint method), 27
test() (deepxde.data.data.Data method), 28
test() (deepxde.data.data.Tuple method), 28
test() (deepxde.data.dataset.DataSet method), 28
test() (deepxde.data.fpde.FPDE method), 29
test() (deepxde.data.fpde.TimeFPDE method), 30
test() (deepxde.data.func_constraint.FuncConstraint method), 31
test() (deepxde.data.function.Function method), 31
test() (deepxde.data.ide.IDE method), 32
test() (deepxde.data.mf.MfDataSet method), 32
test() (deepxde.data.mf.MfFunc method), 32
test() (deepxde.data.pde.PDE method), 34
test() (deepxde.data.triple.Triple method), 35
test() (deepxde.data.triple.TripleCartesianProd method), 36
test_aux_vars (deepxde.data.pde.PDE attribute), 34
test_points() (deepxde.data.fpde.FPDE method), 29
test_points() (deepxde.data.ide.IDE method), 32
test_points() (deepxde.data.pde.PDE method), 34
test_x (deepxde.data.pde.PDE attribute), 34
TimeDomain (class in deepxde.geometry.timedomain), 43
TimeFPDE (class in deepxde.data.fpde), 30
TimePDE (class in deepxde.data.pde), 34
Timer (class in deepxde.callbacks), 22
train() (deepxde.model.Model method), 25
train_aux_vars (deepxde.data.pde.PDE attribute), 34
train_next_batch() (deepxde.data.constraint.Constraint method), 27
train_next_batch() (deepxde.data.data.Data method), 28
train_next_batch() (deepxde.data.data.Tuple method), 28
train_next_batch() (deepxde.data.dataset.DataSet method), 28
train_next_batch() (deepxde.data.fpde.FPDE method), 29
train_next_batch() (deepxde.data.fpde.TimeFPDE method), 30
train_next_batch() (deepxde.data.func_constraint.FuncConstraint method), 31
train_next_batch() (deepxde.data.function.Function method), 31
train_next_batch() (deepxde.data.ide.IDE method), 32
train_next_batch() (deepxde.data.mf.MfDataSet method), 32


train_next_batch() (deepxde.data.mf.MfFunc method), 32
train_next_batch() (deepxde.data.pde.PDE method), 34
train_next_batch() (deepxde.data.triple.Triple method), 35
train_next_batch() (deepxde.data.triple.TripleCartesianProd method), 36
train_points() (deepxde.data.fpde.TimeFPDE method), 30
train_points() (deepxde.data.pde.PDE method), 34
train_points() (deepxde.data.pde.TimePDE method), 34
train_x (deepxde.data.pde.PDE attribute), 33
train_x_all (deepxde.data.pde.PDE attribute), 33
train_x_bc (deepxde.data.pde.PDE attribute), 33
TrainState (class in deepxde.model), 26
transform_inputs() (deepxde.data.dataset.DataSet method), 28
Triangle (class in deepxde.geometry.geometry_2d), 39
Triple (class in deepxde.data.triple), 35
TripleCartesianProd (class in deepxde.data.triple), 35
Tuple (class in deepxde.data.data), 28

U
uniform_boundary_points() (deepxde.geometry.geometry.Geometry method), 37
uniform_boundary_points() (deepxde.geometry.geometry_1d.Interval method), 38
uniform_boundary_points() (deepxde.geometry.geometry_2d.Disk method), 39
uniform_boundary_points() (deepxde.geometry.geometry_2d.Polygon method), 39
uniform_boundary_points() (deepxde.geometry.geometry_2d.Rectangle method), 39
uniform_boundary_points() (deepxde.geometry.geometry_2d.Triangle method), 40
uniform_boundary_points() (deepxde.geometry.geometry_3d.Cuboid method), 41
uniform_boundary_points() (deepxde.geometry.timedomain.GeometryXTime method), 43
uniform_initial_points() (deepxde.geometry.timedomain.GeometryXTime method), 43
uniform_points() (deepxde.geometry.geometry.Geometry method), 37
uniform_points() (deepxde.geometry.geometry_1d.Interval method), 38
uniform_points() (deepxde.geometry.geometry_nd.Hypercube method), 42
uniform_points() (deepxde.geometry.timedomain.GeometryXTime method), 43
uniformly_continuous_delta() (in module deepxde.utils.external), 55
union() (deepxde.geometry.geometry.Geometry method), 37
update_best() (deepxde.model.TrainState method), 26
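The uniform_* methods are the grid-like counterparts of the random sampling methods listed under R; on a GeometryXTime they also cover the initial time slice. A short sketch; the interval bounds and point counts are arbitrary:

    import deepxde as dde

    geom = dde.geometry.Interval(0, 1)
    timedomain = dde.geometry.TimeDomain(0, 2)
    geomtime = dde.geometry.GeometryXTime(geom, timedomain)

    xt_grid = geomtime.uniform_points(100)          # roughly 100 points on a space-time grid
    xt_init = geomtime.uniform_initial_points(10)   # points at the initial time t = 0
    xt_bdry = geomtime.uniform_boundary_points(20)  # spatial-boundary points over time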

V
values_to_func() (deepxde.utils.external.PointSet method), 54
VariableValue (class in deepxde.callbacks), 22
VarianceScalingStacked (class in deepxde.nn.initializers), 45

Z
zero() (in module deepxde.losses), 24
zero_function() (in module deepxde.data.helper), 31
