arb/todo.txt

* Verify error bounds used in the fixed-point exponential code
* Consider changing the interface of functions such as X_set_Y, X_neg_Y
to always take a precision parameter (and get rid of X_set_round_Y,
X_neg_Y etc.). Perhaps have X_setexact_Y methods for convenience,
or make an exception for _set_ in particular.
* Make sure that excessive shifts in add/sub are detected
with exact precision. Write tests for correctness of overlaps/contains
in huge-exponent cases.
* Double-check correctness of add/sub code with large shifts (rounding x+eps).
* Work out semantics for comparisons/overlap/containment checks
when NaNs are involved, and write test code.
* Fix missing/wrong error bounds currently used in the code (see TODO/XXX).
* Add missing polynomial functionality (conversions, arithmetic, etc.)
* Make mullow and power series methods always truncate the inputs to length n.
* More transcendental functions.
* Add adjustment code for balls (when the mantissa is much more precise than
the error bound, it can be truncated). Also, try to work out more consistent
semantics for ball arithmetic (with regard to extra working precision, etc.)
* Do a low-level rewrite of the fmpr type.
The mantissa should probably be changed to an unsigned, top-aligned fraction
(i.e. the exponent will point to the top rather than the bottom, and
the top bit of the mantissa will always be set).
This requires a separate sign field, increasing the struct size from
2 to 3 words, but ought to lead to simpler code and slightly less overhead.
The unsigned fraction can be stored directly in a ulong when it has
at most 64 bits. A zero top bit can be used to tag the field as a pointer.
The pointer could either be to an mpz struct or directly to a limb array
where the first two limbs encode the allocation and used size.
There should probably be a recycling mechanism as for fmpz.
  Required work:
    - memory allocation code
    - conversions to/from various integer types
    - rounding/normalization
    - addition
    - subtraction
    - comparison
    - multiplication
    - fix any code accessing the exponent and mantissa directly as integers
  Lower priority:
    - low-level division, square root (these are not as critical for
      performance -- it is ok to do them by converting to integers and back)
    - direct low-level code for addmul, mul_ui etc.
* Native string conversion code instead of relying on mpfr (so we can have
big exponents, etc.).
* Add functions for sloppy arithmetic (non-exact rounding). This could be
used to speed up some ball operations with inexact output, where we don't
need the best possible result, just a correct error bound.
* Write functions that ignore the possibility that exponents might be
large, and use them where appropriate (e.g. in polynomial and matrix multiplication
where one bounds magnitudes in an initial pass).
* Write a faster logarithmic rising factorial (with correct branch
cuts) for reducing the complex log gamma function. Also implement
the logarithmic reflection formula.
* Rewrite fmprb_div (similar to fmprb_mul)
* Faster elementary functions at low precision (especially log/arctan).
Use Brent's algorithm (http://maths-people.anu.edu.au/~brent/pd/RNC7t4.pdf):
atan(x) = atan(p/q) + atan((q*x-p)/(q+p*x))
* Document fmpz_extras
* Use the complex Newton iteration for cos(pi p/q) when appropriate.
Double check the proof of correctness of the complex Newton iteration
and make it work when the polynomial is not exact.
* For small cos(pi p/q) and sin(pi p/q) use a lookup table of the
1/q values and then do complex binary exponentiation.
* Investigate using Chebyshev polynomials for elefun_cos_minpoly.
This is certainly faster when n is prime, but might be faster for all n,
at least if implemented cleverly.
* Add polynomial mulmid, and use in Newton iteration
* Tune basecase/Newton selection for exp/sin/cos series (the basecase
algorithms are more stable, and faster for quite large n)
* Look at using the exponential to compute the complex sine/cosine series
* Use binary splitting to speed up the tail evaluation of zeta when
computing a large number of derivatives; also check if
skipping even terms in the power sum helps.
* Tune zeta algorithm selection.
* Extend Stirling series code to compute polygamma functions (i.e. starting
the series from some derivative), and optimize for a small number of
derivatives by using a direct recurrence instead of binary splitting.
* Fall back to the real code when evaluating gamma functions (or their
power series) at points that happen to be real.