A general Algebraic type. Can be used to represent real numbers and approximate them on demand.
A typeclass approach for getting an approximation to an A, using error B, in type C.
FixedPoint is a value class that provides fixed point arithmetic operations (using an implicit denominator) to unboxed Long values.
Working with FixedPoint values is similar to other fractional types, except that most operations require an implicit FixedScale instance (which provides the denominator).
For example:
// interpret FixedPoint(n) as n/1000
implicit val scale = FixedScale(1000)

// these three values are equivalent
val a = FixedPoint("12.345")            // decimal repr
val b = FixedPoint(Rational(2469, 200)) // fraction repr
val c = new FixedPoint(12345L)          // "raw" repr
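The arithmetic behind this representation can be sketched in plain Scala. This is an illustration of the fixed-point idea only, not Spire's actual FixedPoint implementation: a raw Long n is read as n/scale, so addition works directly on raw values while multiplication must divide by the scale once to stay in the same representation.

```scala
// Hypothetical sketch of fixed-point arithmetic over raw Longs with a
// denominator (scale) of 1000: raw value n means n/1000.
val scale = 1000L

def fromDouble(x: Double): Long = Math.round(x * scale) // 12.345 -> raw 12345
def toDouble(raw: Long): Double = raw.toDouble / scale

def plus(a: Long, b: Long): Long  = a + b               // denominators already match
def times(a: Long, b: Long): Long = a * b / scale       // rescale the product once

val a = fromDouble(12.345) // raw 12345
val b = fromDouble(2.0)    // raw 2000
println(toDouble(plus(a, b)))  // 14.345
println(toDouble(times(a, b))) // 24.69
```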
Value class which encodes two floating point values in a Long.
We get (basically) unboxed complex numbers using this hack. The underlying implementation lives in the FastComplex object.
A Floating-point Filter [1] provides a Numeric type that wraps another Numeric type, but defers its computation, instead providing a floating point (Double) approximation. For some operations, like signum, comparisons, equality checks, toFloat, etc., the Double approximation may be used to compute the result, rather than having to compute the exact value.
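The filtering idea can be sketched in a few lines. This is a hypothetical illustration, not Spire's FpFilter API: carry a Double approximation together with a maximum-error bound and a lazily computed exact value, and only force the exact value when the approximation cannot decide the predicate.

```scala
// Sketch of a floating-point filter: answer signum from the Double
// approximation when the error bound permits, else force the exact value.
final case class Filtered(approx: Double, maxError: Double, exact: () => BigDecimal) {
  def signum: Int =
    if (approx - maxError > 0) 1        // conclusively positive
    else if (approx + maxError < 0) -1  // conclusively negative
    else exact().signum                 // too close to zero: compute exactly
}

val clearlyPositive = Filtered(2.0, 0.5, () => sys.error("never forced"))
println(clearlyPositive.signum) // 1, without touching the exact value

val tooClose = Filtered(1e-20, 1e-10, () => BigDecimal("-0.0000001"))
println(tooClose.signum) // -1, decided by the exact computation
```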
An FpFilter can generally be used with any Ring numeric type (it also supports EuclideanRing, Field, and NRoot). However, it should be kept in mind that FpFilter knows nothing about the type it's wrapping and assumes that, generally, it is more accurate than the Double approximation. When an FpFilter cannot determine an answer to some predicate exactly, it will defer to the wrapped value, so it probably doesn't make sense to wrap Ints, when an Int will overflow before a Double!
Good candidates to wrap in FpFilter are BigInts, Rationals, BigDecimals, and Algebraic. Note that while Algebraic has an internal floating-point filter, this still provides benefits. Namely, the operator fusion and allocation removal provided by the macros can make for much faster hot paths.
Note: Both equals and hashCode will generally force the exact computation. They should be avoided (prefer === for equals)... otherwise why bother?
[1] Burnikel, Funke, Seel. Exact Geometric Computation Using Cascading. SoCG 1998.
Interval represents a set of values, usually numbers.
Intervals have upper and lower bounds. Each bound can be one of three kinds:
* Closed: The boundary value is included in the interval.
* Open: The boundary value is excluded from the interval.
* Unbound: There is no boundary value.
When the underlying type of the interval supports it, intervals may be used in arithmetic. There are several possible interpretations of interval arithmetic: the interval can represent uncertainty about a single value (for instance, a quantity +/- tolerance in engineering) or it can represent all values in the interval simultaneously. In this implementation we have chosen to use the probabilistic interpretation.
One common pitfall with interval arithmetic is that many familiar algebraic relations do not hold. For instance, given two intervals a and b:
a == b does not imply a * a == a * b
Consider a = b = [-1, 1]. Since any number times itself is non-negative, a * a = [0, 1]. However, a * b = [-1, 1], since we may actually have a=1 and b=-1.
These situations will result in loss of precision (in the form of wider intervals). The result is not wrong per se, but less accurate than it could be.
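The a * a vs. a * b pitfall above can be demonstrated with a minimal closed-interval sketch (a hypothetical illustration, not Spire's Interval API). Naive interval multiplication takes the extremes of the endpoint products; a dedicated squaring operation knows both factors are the same value and can be tighter.

```scala
// Minimal closed-interval sketch showing why a * a is wider than a.sq.
final case class Iv(lo: Double, hi: Double) {
  def *(rhs: Iv): Iv = {
    // the product is bounded by the extremes of all endpoint products
    val ps = List(lo * rhs.lo, lo * rhs.hi, hi * rhs.lo, hi * rhs.hi)
    Iv(ps.min, ps.max)
  }
  // squaring knows both factors are the *same* value, so it can be tighter
  def sq: Iv =
    if (lo >= 0) Iv(lo * lo, hi * hi)
    else if (hi <= 0) Iv(hi * hi, lo * lo)
    else Iv(0.0, math.max(lo * lo, hi * hi))
}

val a = Iv(-1.0, 1.0)
println(a.sq)   // Iv(0.0, 1.0):  x * x is never negative
println(a * a)  // Iv(-1.0, 1.0): the naive product can't see the correlation
```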
Used to implicitly define the dimensionality of the Jet space.
the number of dimensions.
TODO:
3. LiteralOps? Literal conversions?
4. Review operator symbols?
5. Support for more operators?
6. Start to worry about things like e.g. pow(BigInt, BigInt)
Provides a type to do safe Long arithmetic. This type will never overflow, but rather converts the underlying Long to a BigInt as needed, and back down to a Long when possible.
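The promote-on-overflow idea can be sketched with the JDK's exact arithmetic helpers. This is an illustration of the technique, not SafeLong's actual implementation: stay in Long on the fast path, and fall back to BigInt only when an operation would overflow.

```scala
// Sketch: Long addition that promotes to BigInt instead of overflowing.
def safePlus(a: Long, b: Long): Either[Long, BigInt] =
  try Left(Math.addExact(a, b)) // fast path: the sum fits in a Long
  catch { case _: ArithmeticException => Right(BigInt(a) + BigInt(b)) }

println(safePlus(1L, 2L))            // Left(3)
println(safePlus(Long.MaxValue, 1L)) // Right(9223372036854775808)
```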
Given a function for finding approximate medians, this will create an exact median finder.
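The construction above can be sketched with quickselect: any approximate-median function, used as a pivot chooser, yields the exact median. Names here are hypothetical, not Spire's Select API.

```scala
// Quickselect: find the k-th smallest element, pivoting on whatever the
// approximate-median function suggests.
def select(xs: Vector[Int], k: Int, approxMedian: Vector[Int] => Int): Int = {
  val pivot = approxMedian(xs)
  val lt = xs.filter(_ < pivot)
  val eq = xs.filter(_ == pivot)
  if (k < lt.length) select(lt, k, approxMedian)
  else if (k < lt.length + eq.length) pivot
  else select(xs.filter(_ > pivot), k - lt.length - eq.length, approxMedian)
}

def exactMedian(xs: Vector[Int], approxMedian: Vector[Int] => Int): Int =
  select(xs, xs.length / 2, approxMedian)

// even a crude approximation (here: any element) still yields the exact median
val crude: Vector[Int] => Int = v => v(v.length / 2)
println(exactMedian(Vector(9, 1, 8, 3, 7, 5, 2), crude)) // 5
```

A better approximate median only changes how fast the recursion shrinks, never the answer.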
Interface for a sorting strategy object.
Implementation of three-valued logic.
This type resembles Boolean, but has three values instead of two: True, False, and Unknown.
Trilean supports the same operations that Boolean does, and as long as all values are True or False, the results will be the same. However, the truth tables have to be extended to work with unknown:
not:
 -+-
 T|F
 U|U
 F|T

and:  |T U F
     -+-----
     T|T U F
     U|U U F
     F|F F F

or:   |T U F
     -+-----
     T|T T T
     U|T U U
     F|T U F
Trilean is implemented as a value type, so in most cases it will only have the overhead of a single Int. However, in some situations it will be boxed.
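The truth tables above can be reproduced with a small sketch (an illustration only; Spire's Trilean is an unboxed value class, not a sealed trait like this).

```scala
// Three-valued logic matching the extended truth tables above.
sealed trait Tri
case object True extends Tri
case object False extends Tri
case object Unknown extends Tri

def not(a: Tri): Tri = a match {
  case True    => False
  case False   => True
  case Unknown => Unknown
}

def and(a: Tri, b: Tri): Tri = (a, b) match {
  case (False, _) | (_, False)     => False   // one false side decides it
  case (Unknown, _) | (_, Unknown) => Unknown
  case _                           => True
}

def or(a: Tri, b: Tri): Tri = not(and(not(a), not(b))) // De Morgan

println(and(True, Unknown)) // Unknown
println(or(Unknown, True))  // True
println(or(Unknown, False)) // Unknown
```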
FastComplex is an ugly, beautiful hack.
The basic idea is to encode two 32-bit Floats into a single 64-bit Long. The lower-32 bits are the "real" Float and the upper-32 are the "imaginary" Float.
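The encoding just described can be sketched with the JDK's bit-conversion primitives. A sketch of the technique, not FastComplex's actual code:

```scala
// Pack two 32-bit Floats into one Long: real in the low 32 bits,
// imaginary in the high 32 bits.
def encode(real: Float, imag: Float): Long =
  (java.lang.Float.floatToIntBits(real) & 0xffffffffL) |
    (java.lang.Float.floatToIntBits(imag).toLong << 32)

def realOf(d: Long): Float = java.lang.Float.intBitsToFloat((d & 0xffffffffL).toInt)
def imagOf(d: Long): Float = java.lang.Float.intBitsToFloat((d >>> 32).toInt)

// addition works on raw Longs, never allocating a complex-number object
def add(a: Long, b: Long): Long =
  encode(realOf(a) + realOf(b), imagOf(a) + imagOf(b))

val z = add(encode(1.5f, 2.0f), encode(0.5f, -1.0f))
println((realOf(z), imagOf(z))) // (2.0, 1.0)
```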
Since we're overloading the meaning of Long, all the operations have to be defined on the FastComplex object, meaning the syntax for using this is a bit ugly. To add to the ugly beauty of the whole thing I could imagine defining implicit operators on Long like +@, -@, *@, /@, etc.
You might wonder why it's even worth doing this. The answer is that when you need to allocate an array of e.g. 10-20 million complex numbers, the GC overhead of using *any* object is HUGE. Since we can't build our own "pass-by-value" types on the JVM we are stuck doing an encoding like this.
Here are some profiling numbers for summing an array of complex numbers, timed against a concrete case class implementation using Float (in ms):
 size | encoded |  class
   1M |     5.1 |    5.8
   5M |    28.5 |   91.7
  10M |    67.7 |  828.1
  20M |   228.0 | 2687.0
Not bad, eh?
Simple implementation of insertion sort.
Works well for small arrays, but due to its O(n^2) complexity it is not a good general-purpose sort.
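The classic algorithm can be sketched as follows (an illustration, not Spire's generic InsertionSort, which is polymorphic over an Order instance):

```scala
// In-place insertion sort: grow a sorted prefix one element at a time.
def insertionSort(a: Array[Int]): Unit = {
  var i = 1
  while (i < a.length) {
    val item = a(i)
    var j = i - 1
    // shift larger elements right to open a slot for item
    while (j >= 0 && a(j) > item) { a(j + 1) = a(j); j -= 1 }
    a(j + 1) = item
    i += 1
  }
}

val xs = Array(5, 2, 4, 6, 1, 3)
insertionSort(xs)
println(xs.mkString(", ")) // 1, 2, 3, 4, 5, 6
```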
A simple implementation of N-dimensional dual numbers, for automatically computing exact derivatives of functions. This code (and documentation) closely follow the one in Google's "Ceres" library of non-linear least-squares solvers (see Sameer Agarwal, Keir Mierle, and others: Ceres Solver.)
While a complete treatment of the mechanics of automatic differentiation is beyond the scope of this documentation (see http://en.wikipedia.org/wiki/Automatic_differentiation for details), the basic idea is to extend normal arithmetic with an extra element "h" such that h != 0, but h^2 = 0. Dual numbers are extensions of the real numbers analogous to complex numbers: whereas complex numbers augment the reals by introducing an imaginary unit i such that i^2 = -1, dual numbers introduce an "infinitesimal" unit h such that h^2 = 0. Analogously to a complex number c = x + y*i, a dual number d = x + y*h has two components: the "real" component x, and an "infinitesimal" component y. Surprisingly, this leads to a convenient method for computing exact derivatives without needing to manipulate complicated symbolic expressions.
For example, consider the function
f(x) = x * x,
evaluated at 10. Using normal arithmetic, f(10) = 100, and df/dx(10) = 20. Next, augment 10 with an infinitesimal h to get:
f(10 + h) = (10 + h) * (10 + h)
          = 100 + 2 * 10 * h + h * h
          = 100 + 20 * h       +---
                  +-----         |
                  |              +--- This is zero
                  +----------------- This is df/dx
Note that the derivative of f with respect to x is simply the infinitesimal component of the value of f(x + h). So, in order to take the derivative of any function, it is only necessary to replace the numeric "object" used in the function with one extended with infinitesimals. The class Jet, defined in this file, is one such example of this, where substitution is done with generics.
To handle derivatives of functions taking multiple arguments, different infinitesimals are used, one for each variable to take the derivative of. For example, consider a scalar function of two scalar parameters x and y:
f(x, y) = x * x + x * y
Following the technique above, to compute the derivatives df/dx and df/dy for f(1, 3) involves doing two evaluations of f, the first time replacing x with x + h, the second time replacing y with y + h.
For df/dx:
f(1 + h, y) = (1 + h) * (1 + h) + (1 + h) * 3
            = 1 + 2 * h + 3 + 3 * h
            = 4 + 5 * h

Therefore df/dx = 5
For df/dy:
f(1, 3 + h) = 1 * 1 + 1 * (3 + h)
            = 1 + 3 + h
            = 4 + h

Therefore df/dy = 1
To take the gradient of f with the implementation of dual numbers ("jets") in this file, it is necessary to create a single jet type which has components for the derivative in x and y, and pass them to a routine computing function f. It is convenient to use a generic version of f, that can be called also with non-jet numbers for standard evaluation:
def f[@specialized(Double) T : Field](x: T, y: T): T = x * x + x * y

val xValue = 9.47892774
val yValue = 0.287740

// The "2" means there should be 2 dual number components.
implicit val dimension = JetDim(2)

val x: Jet[Double] = xValue + Jet.h[Double](0) // Pick the 0th dual number for x.
val y: Jet[Double] = yValue + Jet.h[Double](1) // Pick the 1st dual number for y.
val z: Jet[Double] = f(x, y)

println("df/dx = " + z.infinitesimal(0) + ", df/dy = " + z.infinitesimal(1))
For the more mathematically inclined, this file implements first-order "jets". A 1st order jet is an element of the ring
T[N] = T[t_1, ..., t_N] / (t_1, ..., t_N)^2
which essentially means that each jet consists of a "scalar" value 'a' from T and a 1st order perturbation vector 'v' of length N:
x = a + \sum_i v[i] t_i
A shorthand is to write an element as x = a + u, where u is the perturbation. Then, the main point about the arithmetic of jets is that the product of perturbations is zero:
(a + u) * (b + v) = ab + av + bu + uv
= ab + (av + bu) + 0
which is what the multiplication operator implements below. Addition is simpler:
(a + u) + (b + v) = (a + b) + (u + v).
The only remaining question is how to evaluate the function of a jet, for which we use the chain rule:
f(a + u) = f(a) + f'(a) u
where f'(a) is the (scalar) derivative of f at a.
By pushing these things through generics, we can write routines that at the same time evaluate mathematical functions and compute their derivatives through automatic differentiation.
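The jet arithmetic above can be demonstrated with a minimal one-dimensional dual number (an illustrative sketch, not the N-dimensional Jet class itself): the real part carries the value, the infinitesimal part carries the derivative, and the u*v term of a product vanishes because h^2 = 0.

```scala
// One-dimensional dual number: a + u*h with h^2 = 0.
final case class Dual(a: Double, u: Double) {
  def +(rhs: Dual): Dual = Dual(a + rhs.a, u + rhs.u)
  def *(rhs: Dual): Dual = Dual(a * rhs.a, a * rhs.u + rhs.a * u) // uv term is 0
}

def f(x: Dual): Dual = x * x // f(x) = x^2, so f'(x) = 2x

val x = Dual(10.0, 1.0)      // 10 + h: seed the infinitesimal with 1
val y = f(x)
println(y.a) // 100.0 = f(10)
println(y.u) // 20.0  = df/dx at 10
```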
In-place merge sort implementation. This sort is stable but does mutate the given array. It is an in-place sort but it does allocate a temporary array of the same size as the input. It uses InsertionSort for sorting very small arrays.
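A sketch of this shape of merge sort, mutating its input and allocating one temporary buffer (the small-array InsertionSort cutoff is omitted here for brevity; this is an illustration, not Spire's generic MergeSort):

```scala
// Stable merge sort: recursively sort halves, merge through a temp buffer.
def mergeSort(a: Array[Int]): Unit = {
  val tmp = new Array[Int](a.length)
  def sort(lo: Int, hi: Int): Unit = // sorts a(lo until hi)
    if (hi - lo > 1) {
      val mid = (lo + hi) / 2
      sort(lo, mid); sort(mid, hi)
      // merge the two sorted halves into tmp, then copy back
      var i = lo; var j = mid; var k = lo
      while (k < hi) {
        if (j >= hi || (i < mid && a(i) <= a(j))) { tmp(k) = a(i); i += 1 }
        else { tmp(k) = a(j); j += 1 }
        k += 1
      }
      Array.copy(tmp, lo, a, lo, hi - lo)
    }
  sort(0, a.length)
}

val data = Array(3, 1, 4, 1, 5, 9, 2, 6)
mergeSort(data)
println(data.mkString(", ")) // 1, 1, 2, 3, 4, 5, 6, 9
```

Taking from the left half on ties (the `<=`) is what makes the sort stable.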
Convenient apply and implicits for Numbers
Polynomial A univariate polynomial class and EuclideanRing extension trait for arithmetic operations. Polynomials can be instantiated using any type C for which a Ring[C] and Signed[C] are in scope, with exponents given by Int values. Some operations require a Field[C] to be in scope.
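Univariate polynomial arithmetic can be sketched over sparse coefficient maps (exponent -> coefficient). This is a hypothetical illustration over Int coefficients, not Spire's Polynomial API:

```scala
// Sparse univariate polynomials as Map[exponent, coefficient].
type Poly = Map[Int, Int]

def plus(p: Poly, q: Poly): Poly =
  (p.keySet ++ q.keySet).iterator
    .map(e => e -> (p.getOrElse(e, 0) + q.getOrElse(e, 0)))
    .filter(_._2 != 0).toMap

def times(p: Poly, q: Poly): Poly =
  p.iterator.flatMap { case (e1, c1) =>
    q.iterator.map { case (e2, c2) => (e1 + e2, c1 * c2) } // x^e1 * x^e2 = x^(e1+e2)
  }.toList.groupMapReduce(_._1)(_._2)(_ + _).filter(_._2 != 0)

val p: Poly = Map(1 -> 1, 0 -> 1)  // x + 1
val q: Poly = Map(1 -> 1, 0 -> -1) // x - 1
println(times(p, q))               // x^2 - 1
```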
In-place quicksort implementation. It is not stable, but does not allocate extra space (other than stack). Like MergeSort, it uses InsertionSort for sorting very small arrays.
Object providing in-place sorting capability for arrays.
Sorting.sort() uses quickSort() by default (in-place, not stable, generally fastest but might hit bad cases where it's O(n^2)). Also provides mergeSort() (in-place, stable, uses extra memory, still pretty fast) and insertionSort(), which is slow except for small arrays.
abs
ceil
choose (binomial coefficient)
e
exp
factorial
fibonacci
floor
gcd
lcm
log
max
min
pi
Exponentiation function, e.g. x^y
If base^ex doesn't fit in a Long, the result will overflow (unlike Math.pow which will return +/- Infinity).
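The standard technique for integer exponentiation is squaring by the bits of the exponent. A sketch of that technique (not Spire's actual pow), which, like the function described above, silently overflows rather than returning Infinity:

```scala
// Exponentiation by squaring over Long; overflow wraps, as Long arithmetic does.
def pow(base: Long, ex: Long): Long = {
  require(ex >= 0, "negative exponent")
  var b = base; var e = ex; var acc = 1L
  while (e > 0) {
    if ((e & 1L) == 1L) acc *= b // fold in the current bit of the exponent
    b *= b
    e >>= 1
  }
  acc
}

println(pow(2L, 10L)) // 1024
println(pow(2L, 64L)) // 0: 2^64 overflows Long, unlike Math.pow which stays finite
```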
pow
Basic tools for prime factorization.
This package is intended to provide tools for factoring numbers, checking primality, generating prime numbers, etc. For now, its main contributions are a method for factoring integers (spire.math.prime.factor) and a type for representing prime factors and their exponents (spire.math.prime.Factors).
The factorization currently happens via an implementation of Pollard-Rho with Brent's optimization. This technique works very well for composites with small prime factors (up to 10 decimal digits or so) and can support semiprimes (products of two similarly-sized primes) of 20-40 digits.
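The core of the method can be sketched as basic Pollard's rho with Floyd cycle-finding (the Brent optimization mentioned above is omitted). This is an illustrative sketch, not spire.math.prime.factor:

```scala
import scala.util.Random

// Pollard's rho: iterate x -> x^2 + c (mod n) with a tortoise and a hare;
// gcd(|x - y|, n) eventually reveals a nontrivial factor of a composite n.
def pollardRho(n: BigInt, rnd: Random = new Random(42)): BigInt = {
  if (n % 2 == 0) return BigInt(2)
  val c = BigInt(rnd.nextInt(1000) + 1)
  def g(x: BigInt): BigInt = (x * x + c) % n
  var x = BigInt(2); var y = BigInt(2); var d = BigInt(1)
  while (d == 1) {
    x = g(x)    // tortoise: one step
    y = g(g(y)) // hare: two steps
    d = (x - y).abs.gcd(n)
  }
  d // a divisor of n (possibly n itself on an unlucky cycle; then retry with new c)
}

val n = BigInt("8051") // 83 * 97
val d = pollardRho(n)
println((d, n / d))
```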
The implementation does cheat, using BigInteger.isProbablePrime(40) to test basic primality. This has a roughly 1-in-1,000,000,000,000 chance of being wrong.
Since Pollard-Rho uses random primes, its performance is somewhat non-deterministic. On this machine, factoring 20-digit semiprimes seems to average about 1.5s and factoring 30-digit semiprimes seems to average about 20s. Much larger numbers can be factored provided they are either prime or composites with smallish factors.
round
signum
sqrt