From: rusin@washington.math.niu.edu (Dave Rusin)
Newsgroups: sci.math
Subject: Re: orthogonal functions?
Date: 23 Jan 1996 18:39:11 GMT
In article ,
Blattner Peter wrote:
>I'm looking for a set of functions (for example polynomials...)
>
> P_n(x)
>
>which are orthogonal (or better : form a basis of a vector space in L2)
>over a certain interval (for example ]-1,1[) and whose first derivatives
>at the boundary of the interval are zero.
Well, you can easily make functions whose derivatives are zero at both
ends: if f is any polynomial, then f(x) * ((x-1)*(x+1))^2 and its
derivative both vanish at x=1 and x=-1. The set of such polynomials
forms a subspace (of codimension 4) in the span of all polynomials. A basis
comes from taking f(x)=x^n for n=0, 1, 2, ...
Now if you need an orthonormal family, you can always use the Gram-Schmidt
process on the basis described above; indeed, this is how the Legendre
polynomials are defined (Gram-Schmidt applied to { x^n, n=0, 1, ...} ).
I don't know if this family of functions has been studied or whether it
has any particular attributes similar to the Legendre polynomials, but
it probably does.
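(A numerical aside, not part of the original post: the Gram-Schmidt
construction above is easy to reproduce mechanically. A sketch in Python
using numpy's Polynomial class; `inner` is the L^2 inner product on [-1,1],
computed exactly, up to floating point, from polynomial antiderivatives.)

```python
import numpy as np
from numpy.polynomial import Polynomial

def inner(p, q):
    """Exact L2 inner product of two polynomials on [-1, 1]."""
    r = (p * q).integ()
    return r(1.0) - r(-1.0)

w = Polynomial([-1, 0, 1]) ** 2               # (x^2 - 1)^2
basis = []
for n in range(4):
    v = Polynomial([0.0] * n + [1.0]) * w     # x^n * (x^2 - 1)^2
    for u in basis:                           # Gram-Schmidt step
        v = v - inner(v, u) * u
    basis.append(v / np.sqrt(inner(v, v)))

# basis[0] should come out as (3/16)*sqrt(5*7)*(x^2-1)^2,
# matching the n=0 entry of the table below.
```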
Here are the first few terms. Each is of the form (3/2^k)*sqrt(A)*B*(x^2-1)^2:
 n  k   A              B                  roots of B
 0  4   5*7            1                  -
 1  4   5*7*11         x                  0
 2  5   7*13           11x^2-1            +-.30151
 3  5   5*7*11         (13x^2-3)x         0, +-.48038
 4  7   7*11*17        65x^4-26x^2+1      +-.20762, +-.59741
 5  7   5*7*11*13*19   (17x^4-10x^2+1)x   0, +-.35741, +-.67860
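(Another editorial aside: the table can be checked mechanically. The
Python sketch below, mine rather than the poster's, builds the six listed
functions and checks that they are orthonormal on [-1,1] and that they and
their first derivatives vanish at the endpoints.)

```python
import numpy as np
from numpy.polynomial import Polynomial

w = Polynomial([-1, 0, 1]) ** 2        # (x^2 - 1)^2
# rows of the table: (k, A, B), with B given by coefficients in increasing degree
rows = [
    (4, 5*7,           Polynomial([1])),
    (4, 5*7*11,        Polynomial([0, 1])),             # x
    (5, 7*13,          Polynomial([-1, 0, 11])),        # 11x^2 - 1
    (5, 5*7*11,        Polynomial([0, -3, 0, 13])),     # (13x^2 - 3)x
    (7, 7*11*17,       Polynomial([1, 0, -26, 0, 65])),
    (7, 5*7*11*13*19,  Polynomial([0, 1, 0, -10, 0, 17])),
]
polys = [(3 / 2**k) * np.sqrt(A) * B * w for k, A, B in rows]

def inner(p, q):
    """Exact L2 inner product on [-1, 1] via the antiderivative."""
    r = (p * q).integ()
    return r(1.0) - r(-1.0)

# the Gram matrix of the six functions should be the 6x6 identity
gram = np.array([[inner(p, q) for q in polys] for p in polys])
```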
There are some number-theoretic patterns I haven't tried to clarify. Note
also the alternation of the roots of successive polynomials. I'm
guessing that those who really use Legendre polynomials will see
how to state and prove some appropriate results here.
Observe that the functions I described above have an extra feature you did
not request: that p(1)=p(-1)=0. If you want functions without these
extra conditions, you can enlarge your basis by adding the polynomials
A(x)=(1/4)(x+2)(x-1)^2 and B(x) = A(-x), whose derivatives vanish at
the endpoints, but which also have
A(1)=0 A(-1)=1
B(1)=1 B(-1)=0
With these functions too, you are describing (for differentiable
functions) precisely the codimension-2 subspace of L^2 described in
your post.
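(Editorial aside: these endpoint values take only a few lines to confirm;
a quick Python check, with B written out explicitly rather than composed.)

```python
from numpy.polynomial import Polynomial

x = Polynomial([0, 1])
A = (x + 2) * (x - 1)**2 / 4
B = (2 - x) * (x + 1)**2 / 4      # = A(-x)

# A' = (3/4)(x^2 - 1) and B' = -(3/4)(x^2 - 1): both vanish at x = +-1,
# while A(1)=0, A(-1)=1 and B(1)=1, B(-1)=0 as stated above.
```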
dave
==============================================================================
Newsgroups: sci.math
Subject: Was : orthogonal functions?
From: blattner@imt.unine.ch (Blattner Peter)
Date: Thu, 01 Feb 1996 17:33:57 +0100
Some days ago I asked the following question:
Blattner Peter wrote:
>Hi,
>
>I'm doing some function optimization with the help of Legendre polynomials:
>
> f(x) = sum_n a_n P_n(x) , -1 < x < 1 (to optimize: the a_n)
>
>New in my optimization problem is that the slope of the function f(x) at
>the border should be zero. (Which is obviously not the case for Legendre
>polynomials.)
>
>I'm looking for a set of functions (for example polynomials...)
>
> P_n(x)
>
>which are orthogonal (or better : form a basis of a vector space in L2)
>over a certain interval (for example [-1,1]) and whose first derivatives at
>the boundaries of the interval are zero:
>
> dP_n(x)/dx = 0 at x = -1, 1
>
>Any ideas?
>
>Are there some functions with the same properties mentioned above, but with
>all the derivatives equal to zero at the boundary?
I received the following answers (Thank you for your contributions):
----------------------------------------------------
From: Dave Dodson
P_n(x) will be of the form (1+x)^2 (1-x)^2 Q_(n-4)(x), where Q_(n-4)
is a polynomial of degree n-4, n = 4, 5, 6, ... .
The orthogonality condition is

 integral from -1 to +1 of P_m(x) P_n(x) dx = 0 if m != n,

or equivalently,

 integral from -1 to +1 of (1+x)^2 (1-x)^2 Q_m(x) (1+x)^2 (1-x)^2 Q_n(x) dx = 0 if m != n,

or

 integral from -1 to +1 of (1+x)^4 (1-x)^4 Q_m(x) Q_n(x) dx = 0 if m != n.
The Jacobi polynomials P_n^(4,4)(x) are orthogonal with respect to exactly
this weight (1+x)^4 (1-x)^4, so that defines the Qs. I suggest you look up
the recurrence relation in, e.g., Abramowitz and Stegun, Handbook of
Mathematical Functions, National Bureau of Standards. My copy is dated 1965.
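(Editorial aside: for alpha = beta = 4 the Jacobi polynomials are
proportional to the Gegenbauer (ultraspherical) polynomials C_n^(9/2),
which have a short three-term recurrence; the proportionality constant
does not affect orthogonality. A Python sketch of this, mine rather than
the poster's, checking orthogonality under the weight (1-x^2)^4:)

```python
import numpy as np
from numpy.polynomial import Polynomial

lam = 4.5                     # C_n^(lam) with lam = alpha + 1/2 = 9/2
x = Polynomial([0, 1])

# standard three-term recurrence:
#   n C_n = 2(n + lam - 1) x C_{n-1} - (n + 2 lam - 2) C_{n-2}
C = [Polynomial([1.0]), 2 * lam * x]
for n in range(2, 6):
    C.append((2 * (n + lam - 1) * x * C[n-1] - (n + 2*lam - 2) * C[n-2]) / n)

weight = Polynomial([1, 0, -1]) ** 4          # (1 - x^2)^4

def winner(p, q):
    """Weighted L2 inner product on [-1, 1], exact via the antiderivative."""
    r = (p * q * weight).integ()
    return r(1.0) - r(-1.0)

# distinct C_m, C_n should be orthogonal under this weight
```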
----------------------------------------------------
From: rusin@washington.math.niu.edu (Dave Rusin)
Well, you can easily make functions whose derivatives are zero at both
ends: if f is any polynomial, then f(x) * ((x-1)*(x+1))^2 and its
derivative both vanish at x=1 and x=-1. The set of such polynomials
forms a subspace (of codimension 4) in the span of all polynomials. A basis
comes from taking f(x)=x^n for n=0, 1, 2, ...
Now if you need an orthonormal family, you can always use the Gram-Schmidt
process on the basis described above; indeed, this is how the Legendre
polynomials are defined (Gram-Schmidt applied to { x^n, n=0, 1, ...} ).
I don't know if this family of functions has been studied or whether it
has any particular attributes similar to the Legendre polynomials, but
it probably does.
Here are the first few terms. Each is of the form (3/2^k)*sqrt(A)*B*(x^2-1)^2:
 n  k   A              B                  roots of B
 0  4   5*7            1                  -
 1  4   5*7*11         x                  0
 2  5   7*13           11x^2-1            +-.30151
 3  5   5*7*11         (13x^2-3)x         0, +-.48038
 4  7   7*11*17        65x^4-26x^2+1      +-.20762, +-.59741
 5  7   5*7*11*13*19   (17x^4-10x^2+1)x   0, +-.35741, +-.67860
There are some number-theoretic patterns I haven't tried to clarify. Note
also the alternation of the roots of successive polynomials. I'm
guessing that those who really use Legendre polynomials will see
how to state and prove some appropriate results here.
Observe that the functions I described above have an extra feature you did
not request: that p(1)=p(-1)=0. If you want functions without these
extra conditions, you can enlarge your basis by adding the polynomials
A(x)=(1/4)(x+2)(x-1)^2 and B(x) = A(-x), whose derivatives vanish at
the endpoints, but which also have
A(1)=0 A(-1)=1
B(1)=1 B(-1)=0
With these functions too, you are describing (for differentiable
functions) precisely the codimension-2 subspace of L^2 described in
your post.
----------------------------------------------------
From: elements@ix.netcom.com (William L. Anderson)
You want to approximate a function f(x), whose derivative is zero on the
boundary, by orthogonal functions. A weighted Hermite series offers
attractive possibilities. It best approximates a localized "salient"
(bump) function that, along with all its derivatives, vanishes
sufficiently far away. That satisfies your requirement, but may be too
restrictive: this method is not applicable if second or higher
derivatives are nonzero on the boundary. The case of nonzero f(x) on
the boundary can be handled by symmetry and even functions.
An arbitrary (piecewise continuous) function f(x) can be approximated by
a Hermite series weighted by an exponential factor
 f(x) \approx exp(-x^2) \sum_n c_n H_n(x) (A)
where H_n are Hermite polynomials and c_n are (generalized) Fourier
coefficients. For large values of x, the exponential factor dominates,
forcing the value to approach zero. The same is true of the first
derivative,
 f'(x) \approx -exp(-x^2) \sum_n c_n H_{n+1}(x) (B)
and all higher derivatives. Derivative values at the boundary are not
exactly zero, but by scaling x, can be made arbitrarily close to zero.
Fourier coefficients are calculated with
 c_n = 1/{2^n n! \sqrt{\pi}} \int_{-\infty}^\infty f(x) H_n(x) dx (C)
In evaluating this integral, the limits of integration are restricted to
the range of interest. Usually, approximation (A) converges quickly for
"salient" type functions f(x). Convergence is slow near f(x)
discontinuities.
Note: the coefficients in (C) are the same as those for expanding F(x) =
exp(x^2)f(x) in an ordinary Hermite series. Dividing by the exponential
factor is what makes the above equations a WEIGHTED Hermite series.
The integral of (A) has the interesting property that it depends only on
the first term, involving c_0. In other words, successive
approximations change the shape but not the area under the curve. This
suggests the pleasing analogy of a fixed amount of fluid that gradually
conforms to the desired shape.
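(Editorial aside: a small numerical illustration of (C) and of the area
property. The test function f(x) = x^2 exp(-x^2) is my own choice; for it
F(x) = x^2 = (H_2(x) + 2)/4, so exactly c_0 = 1/2, c_2 = 1/4, and all
other coefficients are zero. Gauss-Hermite quadrature absorbs the
exp(-x^2) weight.)

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite import hermgauss, hermval

# Gauss-Hermite rule: sum(wts * g(nodes)) ~ integral g(x) exp(-x^2) dx
nodes, wts = hermgauss(30)

def coeff(n):
    # (C): c_n = 1/(2^n n! sqrt(pi)) * integral f(x) H_n(x) dx,
    # with f(x) = x^2 exp(-x^2), so the integrand is exp(-x^2) * x^2 * H_n(x)
    cvec = np.zeros(n + 1)
    cvec[n] = 1.0                        # coefficient vector selecting H_n
    integral = np.sum(wts * nodes**2 * hermval(nodes, cvec))
    return integral / (2**n * factorial(n) * sqrt(pi))

c = [coeff(n) for n in range(6)]
area = np.sum(wts * nodes**2)            # integral of f over the real line
# area invariance: the integral of (A) is c_0 * sqrt(pi),
# independent of c_1, c_2, ...
```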
"Salient" (bump) functions can take forms other than weighted Hermite
series. Salients can be appended to form hierarchical tree-like
assemblages. The construction method can be governed by recursive rules
on size, shape, and direction relative to a normal vector or branch
angle. Moreover, (A) generalizes to salients (bumps) in higher
dimensions; in such a space you can do such things as compute geodesics
over and around bumps.
In summary, this method applies to a function f(x) that, along with all
its derivatives, approaches zero on the boundary. The approximating
function is a series of orthogonal weighted Hermite polynomials. You
can compute Fourier coefficients with an integral. The powerful theory
of orthogonal functions (e.g. Dirichlet condition, mean convergence,
etc.) can be applied. You can approximate experimental data, where f(x)
is a collection of connected line segments joining data points. If the
function f(x) is not zero on the boundary, then you can expand in only
even terms and view f(x) as either the left or right side of a symmetric
bump. Finally, if second or higher derivatives are nonzero on the
boundary, then this method should not be used.
----------------------------------------------------
from: smoore@bbn.com
I think the set of functions cos(k*Pi*x), k = 1, 3, 5, ...,
meets your requirements.
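(Editorial aside: indeed, d/dx cos(k*Pi*x) = -k*Pi*sin(k*Pi*x), which
vanishes at x = +-1 for every integer k, and distinct cosines of this form
are orthonormal on [-1,1]. A quick Python check of both facts, mine, using
Gauss-Legendre quadrature:)

```python
import numpy as np

# Gauss-Legendre rule for integrals over [-1, 1]
nodes, wts = np.polynomial.legendre.leggauss(64)

def inner(j, k):
    """Approximate integral of cos(j pi x) cos(k pi x) over [-1, 1]."""
    return np.sum(wts * np.cos(j * np.pi * nodes) * np.cos(k * np.pi * nodes))

ks = [1, 3, 5, 7]
# inner(j, k) should be 1 when j == k and 0 otherwise;
# the derivative -k pi sin(k pi x) is 0 at x = +-1
```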
----------------------------------------------------
From: pablo@news.impa.br (Pablo Emanuel)
What about A = {sin(n*pi*x) | n in N}? It's an orthogonal set in L2(]-1,1[).
Of course it doesn't span L2, since every f in the span satisfies

 f(x) = 0 at x = -1, 1.

The set {sin(n*pi*x), cos(n*pi*x) | n in N} U {1/2} spans L2, and it's
orthogonal, too. To see more about it, look in any book on Fourier series.
----------------------------------------------------
With a reply from: Benjamin.J.Tilly@dartmouth.edu (Benjamin J. Tilly)
To see a LOT more of them, take a look at any book on wavelets. These
bases have, for many practical applications, a number of advantages
over the Fourier series...
These usually will not be polynomials, but the popular examples (such as
various Daubechies wavelets) are easily calculated on a computer.
Furthermore, by construction they can be made to have various nice
properties, including being well-localized in space, approximating
smooth functions well, being easily calculated from sampled data, etc.
----------------------------------------------------------