Talk:Sturm-Liouville theory: Difference between revisions

From Citizendium
imported>Paul Wormer
imported>Peter Schmitt
 
(24 intermediate revisions by 2 users not shown)


:--[[User:Paul Wormer|Paul Wormer]] 07:44, 17 October 2009 (UTC)
:: You are right, Paul. Mentioning diagonal matrices was wrong (I was thinking of ''cE''). But I still don't see your problem. Applying <math>(\Lambda / w) </math> on ''u'' means applying <math>(\Lambda)</math> and then dividing the resulting function (pointwise) by ''w''. Since ''w'' is real this remains the same if you use the adjoint operator. You divide '''after''' the (adjoint) <math>\Lambda </math> is applied on ''v''. It need not and is not meant to commute as an operator. (In this sense it is like a scalar, a scalar function that is cancelled by the weight of the inner product.) (Moreover, I trust the book - link above - by my colleague who is a specialist in differential equations.) [[User:Peter Schmitt|Peter Schmitt]] 09:54, 17 October 2009 (UTC)
:::Let me make it simpler, consider D &equiv; d/dx. Then for arbitrary differentiable function ''f''(''x'')
::::<math>
D [1/w(x) f(x)] = (1/w)' f + (1/w) f' \,
</math>
:::while
::::<math>
1/w D[f(x)] = (1/w) f'  \,
</math>
:::The two expressions are not equal, unless ''w'' is independent of ''x''.
:::--[[User:Paul Wormer|Paul Wormer]] 10:10, 17 October 2009 (UTC)
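Paul's example can also be checked numerically. The sketch below is an editor's illustration (the grid and the choices ''f'' = sin 3''x'', ''w'' = 1 + ''x''² are arbitrary): it approximates both expressions by finite differences and confirms that they differ by exactly the product-rule term (1/''w'')&prime;''f''.

```python
import numpy as np

# Discretize Paul's example with D = d/dx on a grid; f = sin(3x) and
# w = 1 + x^2 are arbitrary illustrative choices.
x = np.linspace(0.0, 1.0, 1001)
f = np.sin(3 * x)
w = 1.0 + x**2                       # non-constant, positive weight

lhs = np.gradient(f / w, x)          # D[(1/w) f]
rhs = np.gradient(f, x) / w          # (1/w) D[f]
# The product rule predicts the difference (1/w)' f = -(w'/w^2) f:
expected_gap = -np.gradient(w, x) / w**2 * f
print(np.max(np.abs(lhs - rhs)))                  # clearly non-zero
print(np.max(np.abs(lhs - rhs - expected_gap)))   # ~ 0 (discretization error)
```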
:::: Sure, but that is not what I am talking about. I am talking about the operator
::::: <math> (D_1f)(x) := \frac {(Df)(x)}{w(x)} </math>
:::: and the claim is that its adjoint is
::::: <math> (D_1^\ast f)(x) := \frac {(D^\ast f)(x)}{w(x)} </math>
::::  since this holds without the weight function (the adjoint being just complex conjugation). [[User:Peter Schmitt|Peter Schmitt]] 14:00, 17 October 2009 (UTC)
[unindent]
I edited [[Adjoint (operator theory)]] showing that [''D 1/w(x)'']<sup>&lowast;</sup> = ''1/w(x)''<sup>&lowast;</sup>''D''<sup>&lowast;</sup>.  This is true for any pair of operators, and ''1/w''(''x'') is nothing but a ''multiplicative operator''.  Since ''w''(''x'') is real, ''1/w(x)''<sup>&lowast;</sup> = ''1/w''(''x''). This does not mean that the position of ''1/w'' is irrelevant. Your notation
:<math>
\frac{D}{w}
</math>
is allowed '''if and only if D and w commute'''.
--[[User:Paul Wormer|Paul Wormer]] 14:23, 17 October 2009 (UTC)
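The finite-dimensional analogue of Paul's rule is easy to verify. In the sketch below (an editor's illustration, not part of the original discussion), an arbitrary complex matrix ''D'' stands in for the differential operator and a real diagonal matrix stands in for multiplication by 1/''w'':

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# D: an arbitrary complex matrix standing in for the differential operator.
D = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
# inv_w: a real positive diagonal matrix standing in for multiplication by 1/w.
inv_w = np.diag(rng.uniform(1.0, 2.0, size=n))

adj = lambda A: A.conj().T
# The adjoint reverses the order of the factors:
print(np.allclose(adj(D @ inv_w), adj(inv_w) @ adj(D)))   # True
# 1/w is real, so adj(inv_w) = inv_w; but inv_w still does not commute with D:
print(np.allclose(D @ inv_w, inv_w @ D))                  # False
```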
: But 1/''w'' is not used as an operator here; moreover, there are two different inner products involved. Explicitly:
:We consider a Hilbert space ''H'' of complex functions with an inner product
:: <math> \left\langle f,g \right\rangle := \int \overline{f(x)} g(x) dx </math>
: Let ''L'' be a selfadjoint operator on ''H''
::<math> \left\langle Lf,g \right\rangle= \left\langle f,Lg \right\rangle </math>
:or explicitly
::<math> \int \overline{(Lf)(x)} g(x) dx = \int \overline{f(x)} (Lg)(x) dx </math>
: Now we consider a Hilbert space <math> H_1 </math> of (the same) complex functions with '''another''' inner product
:: <math> \left\langle f,g \right\rangle_1 := \int \overline{f(x)} g(x) (w(x)dx) </math>
: depending on the real weight function ''w''.
: Define a linear operator by <math> L_1f := \frac {(Lf)}w </math>, i.e.,
::<math> (L_1f)(x) := \frac {(Lf)(x)} {w(x)} </math>
:Then
::<math> \int \overline{(L_1f)(x)}\; g(x) \; (w(x)dx) = \int \overline{\left(\frac {(Lf)(x)}{w(x)}\right)}\; g(x)\; w(x)dx
= \int \overline{(Lf)(x)}\;g(x)\; dx =  </math>
:: <math> \int \overline{f(x)}\; (Lg)(x)\; dx =
\int \overline{f(x)}\; \left(\frac {(Lg)(x)}{w(x)}\right)\; w(x)dx = \int \overline{f(x)}\; (L_1g)(x)\; (w(x)dx) </math>
: That is
::<math> \left\langle L_1f,g \right\rangle_1 = \left\langle f,L_1g \right\rangle_1 </math>
: and therefore  <math>L_1</math> is selfadjoint in the second Hilbert space. [[User:Peter Schmitt|Peter Schmitt]] 19:41, 17 October 2009 (UTC)
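Peter's computation has a direct matrix analogue, sketched below as an editor's illustration with random data: for a Hermitian matrix ''L'', a positive real weight vector ''w'', and the weighted inner product &lang;''u'',''v''&rang;<sub>1</sub> = &sum; ''&#363;''<sub>i</sub>''v''<sub>i</sub>''w''<sub>i</sub>, the matrix version of ''L''<sub>1</sub> = ''L''/''w'' is indeed selfadjoint with respect to the weighted product.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
L = A + A.conj().T                    # selfadjoint w.r.t. the plain inner product
w = rng.uniform(0.5, 2.0, size=n)     # positive real weight
L1 = L / w[:, None]                   # (L1 f)_i = (L f)_i / w_i, i.e. diag(1/w) @ L

f = rng.normal(size=n) + 1j * rng.normal(size=n)
g = rng.normal(size=n) + 1j * rng.normal(size=n)

# Weighted inner product <u,v>_1 = sum conj(u_i) v_i w_i:
ip_w = lambda u, v: np.sum(u.conj() * v * w)
print(np.allclose(ip_w(L1 @ f, g), ip_w(f, L1 @ g)))   # True: L1 selfadjoint in H_1
```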


== New section ==
I start a new section, since the previous one became too long.
First remark: Peter, ask yourself the question:
:<math>
\frac{L}{w} = \frac{1}{w} L ? \quad\hbox{or}\quad  \frac{L}{w} =L\frac{1}{w} ?
</math>
Second remark: I don't think you apply the turnover rule correctly. To show how I think it should be done, I will put the bra in square brackets and consider your starting integral. Divide by real ''w'' in the bra in order to redefine the volume element (the integration measure). ''Apply the usual definition by means of the turnover rule of the adjoint w.r.t. the measure'' ''w(x)dx'' (the last equality in the next line gives the usual definition of <math>\overline{L}</math> by the turnover rule in what Peter calls the second Hilbert space):
:<math>
\int \left[\overline{f(x)}\right] (Lg)(x)\; dx = \int \left[ \frac{1}{w(x)} \overline{f(x)} \right]  (Lg)(x)\; w(x) dx
= \int\left[ \overline{h(x)}\right]  (Lg)(x) w(x) dx =
\int \left[ \overline{L}\,\overline{h(x)}\right] g(x) w(x) dx
</math>
where,
:<math>
h(x) \equiv \frac{1}{w(x)} f(x) \Longrightarrow L(h(x)) = L\left( \frac{1}{w(x)} f(x) \right)
</math>
and
:<math>
\left[\overline{L}\,\overline{h(x)}\right] =  \left[ \overline{L}\overline{\left(\frac{1}{w}\right) f (x)}\right] 
</math>
Hence, using again that ''1/w(x)'' is real,
:<math>
\int\left[ \overline{f(x)}\right] (Lg)(x)\; dx = \int \left[ \overline{L} \frac{1}{w(x)} \overline{f(x)}\right] g(x)\;  w(x) dx \equiv A
</math>
On the other hand, dividing by ''w'' in the ket and again using the definition (turnover rule) of adjoint w.r.t. ''w(x)dx'' gives for the operator  ''(1/w(x)) L''  in the ket the corresponding (overlined) form in the bra:
:<math>
\int \left[\overline{f(x)}\right] (Lg)(x)\; dx = \int \left[ \overline{f(x)} \right] \frac{1}{w(x)}(Lg)(x)\; w(x) dx = \int \left[ \overline{\left(\frac{1}{w(x)} L\right)} \overline{f(x)}\right] \; g(x) w(x)dx \equiv B
</math>
Finally comparing ''A'' and ''B''
:<math>
\overline{\left(\frac{1}{w(x)} L\right)}  = \overline{L} \overline{\left( \frac{1}{w(x)} \right)}
= \overline{L} \left( \frac{1}{w(x)} \right)
</math>
All I did here is prove the very well-known rule for adjoints of operators ''P'' and ''Q''
:<math>
\overline{P\,Q} = \overline{Q}\; \overline{P}.
</math>
In the meantime I adapted the main text because this discussion convinced me that Courant and Hilbert made the correct transformation, and following their lead I edited the text.
--[[User:Paul Wormer|Paul Wormer]] 08:40, 18 October 2009 (UTC)
: Paul, you are very patient. Obviously, I have not been able to express myself clearly enough. We are talking about different things.
* You consider the composition of two operators ''L'' and ''W'' where ''W'' is defined by
:: <math> (Wf) (x) := \frac{1}{w(x)} f(x) </math>
:i.e., ''W(f)'' is the pointwise product of the two complex functions ''f'' and 1 / ''w''.
: Of course, this composition is not commutative (except in special cases): <math> WL \not= LW</math>.
* I consider the linear operator ''L'' and a second linear operator ''L''<sub>1</sub> defined by modifying ''L''.
: ''L''<sub>1</sub>''f'' is defined by
:: <math> (L_1f) (x) := \frac{(Lf)(x)}{w(x)} </math>
:i.e., ''L''<sub>1</sub>''f'' is the pointwise product of the two complex functions ''Lf'' and 1 / ''w''. (This multiplication clearly is commutative.)
* In the first formula of your "second remark" the transformations are not those I have used. They should be:
:<math>
\int \left[\overline{f(x)}\right] (Lg)(x)\; dx = \int \left[ \overline{f(x)} \right] \left[\frac{(Lg)(x)}{w(x)}\right]  \; w(x) dx
= \int\left[ \overline{f(x)}\right] \; (L_1g)(x) w(x) dx =
\int \left[ \overline{L_1f\,(x)} \right] g(x)\; w(x) dx
</math>
* Regarding the "two Hilbert spaces": It is the same vector space, but with different inner product.
: <math> \mathbb H = (V,\langle\cdot,\cdot\rangle) , \quad \mathbb H_1 = (V,\langle\cdot,\cdot\rangle_1) </math>
: <math> f \mapsto \frac f {\sqrt w} </math> is a Hilbert space homomorphism (it is linear and <math> \langle f,g \rangle =  \left\langle \frac f{\sqrt w} ,\frac g{\sqrt w} \right\rangle_1 </math>).
:[[User:Peter Schmitt|Peter Schmitt]] 10:01, 19 October 2009 (UTC)
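Peter's inner-product identity is also easy to check in the discrete analogue: the map ''f'' &#8614; ''f''/&radic;''w'' carries the plain inner product into the weighted one. A sketch with random vectors (an editor's illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
w = rng.uniform(0.5, 2.0, size=n)     # positive real weight
f = rng.normal(size=n) + 1j * rng.normal(size=n)
g = rng.normal(size=n) + 1j * rng.normal(size=n)

ip   = lambda u, v: np.sum(u.conj() * v)        # plain inner product
ip_w = lambda u, v: np.sum(u.conj() * v * w)    # weighted inner product <.,.>_1

# f -> f / sqrt(w) carries the plain inner product into the weighted one:
print(np.allclose(ip(f, g), ip_w(f / np.sqrt(w), g / np.sqrt(w))))   # True
```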
::I understand fully the point (and I agree with it) about the two Hilbert spaces, see also my modification to the main text where I introduce ''y'' and ''u''. However, I maintain that the following is ''not'' correct,
:::<math>
\int\left[ \overline{f(x)}\right] \; (L_1g)(x) w(x) dx =
\int \left[ \overline{L_1f\,(x)} \right] g(x)\; w(x) dx .
</math>
:: In words you define ''L''<sub>1</sub>''g''(''x'') acting in the ket  as:  (1/''w'')''Lg''(''x''). Namely you say:  "first apply ''L'' to ''g'', then divide the resulting function pointwise by ''w''".  You probably define ''L''<sub>1</sub>''f''(''x'') acting in the bra as:  "first apply ''L'' to the complex conjugate of ''f'', then divide the resulting function pointwise  by ''w''". You can define this in words, but IMHO this is not what the equations tell you. I follow all my books on linear algebra, functional analysis, and quantum mechanics and say that the definition of adjoint is such that the order of ''L'' and ''1/w'' is reversed. I hoped to convince you by introducing the auxiliary function ''h''(''x'') in the bra, but apparently I failed. 
::Let us stop this discussion now and concentrate on the changes I made to the text.
::Basically I used a third (symmetric) option:
:::<math>
\frac{L}{w} = \frac{1}{\sqrt{w}} \, L \, \frac{1}{\sqrt{w}}
</math>
::As I wrote before, I know this trick from the matrix generalized eigenvalue problem, and I saw it in Courant-Hilbert (loc. cit.) in the context of the operator S-L problem. Maybe you'll agree that my changes are correct. In any case, in your view the original text is correct, so probably you'll find my changes needlessly complicated (if not wrong); I'm curious to hear your opinion.
::--[[User:Paul Wormer|Paul Wormer]] 12:05, 19 October 2009 (UTC)
(unindent)
Paul, you want to end this. But I hope you will bear with me once more.
Starting from a selfadjoint operator I define a linear operator
which is also selfadjoint. To show this I'll only use explicit computation,
no bra and ket manipulation that I could possibly get wrong :-)
Assume ''L'' (on a space of complex functions) is selfadjoint, i.e.,
* <math> (1)\ (\forall f,g):\ \langle Lf,g \rangle = \langle f,Lg \rangle \Leftrightarrow
\int \overline{(Lf)}\cdot g\;dx = \int \overline{f}\cdot (Lg)\;dx </math>
* ''L''<sub>1</sub> is defined as follows: ''L''<sub>1</sub>''f'' is the function
:: <math>(2)\ (L_1 f)(x) := \left( \frac {(Lf)(x)}{w(x)} \right) </math>
: (where ''w'' is a positive real function), i.e., ''L''<sub>1</sub>''f'' is the (commutative) product (in the function algebra) of the two functions ''Lf'' and 1/''w''.
: <math> L_1 f = \frac {Lf}w = \frac 1w (Lf) = (Lf) \frac 1w </math> and <math> L_1 </math> is linear: <math>
L_1 (\alpha f + \beta g) = \frac { L(\alpha f + \beta g) }w = \frac { \alpha (Lf) + \beta (Lg) }w =
\alpha \frac{Lf}w + \beta \frac{Lg}w = \alpha (L_1f) + \beta (L_1g) </math>.
'''Claim:''' ''L''<sub>1</sub> is selfadjoint. It must be shown that
: <math>\ (3)\ (\forall f,g) \langle L_1f,g \rangle_1 = \langle f,L_1g \rangle_1 </math> where <math> \langle f,g \rangle_1 := \int \overline{f}\cdot g\;(wdx) </math>
: We abbreviate  ''F'' = ''Lf'', ''G'' = ''Lg'', ''F''<sub>1</sub> = ''L''<sub>1</sub>''f'', ''G''<sub>1</sub> = ''L''<sub>1</sub>''g''. Because of (2), <math> F_1 = \frac F w, G_1 = \frac G w </math>.
We have:
* <math>(4a)\ \langle L_1f,g \rangle_1 = \langle F_1,g \rangle_1
= \int \overline{F_1}\cdot g\;(wdx) = \int \overline{(\frac F w)}\cdot g \;(wdx)
= \int \overline{F}\cdot g\;\frac 1{\overline{w}}\;wdx
= \int \overline{F}\cdot g \;dx
</math>
* <math>(4b)\ \langle f,L_1g \rangle_1 = \langle f,G_1 \rangle_1
= \int \overline{f}\cdot G_1\;(wdx) = \int \overline{f}\cdot \frac Gw \;(wdx)
= \int \overline{f}\cdot G\;\frac 1w wdx
= \int \overline{f}\cdot G \;dx
</math>
* Since, by (1), <math> \int \overline{F}\cdot g\;dx = \int \overline{f}\cdot G\;dx </math>, it follows that (4a)=(4b), thus (3).
[[User:Peter Schmitt|Peter Schmitt]] 19:55, 19 October 2009 (UTC)
== Your changes ==
Paul, you write:
: "The proper setting for this problem is the Hilbert space ''L''<sup>2</sup>([''a'',&nbsp;''b''],''w''(''x'')&nbsp;''dx'') with scalar product:"
But in fact you are still using the scalar product without weight when you write:
: "<math> \langle  u_i, u_j\rangle = \int_{a}^{b} \overline{u_i(x)} u_j(x) \,dx
= \int_{a}^{b} \overline{y_i(x)} y_j(x) \, w(x)\, dx, \quad u_{k}(x) \equiv w(x)^{1/2} y_k(x)
</math>"
The first integral is the scalar product, and the second is a computation (much like my transformations) after substitution.
[[User:Peter Schmitt|Peter Schmitt]] 23:02, 19 October 2009 (UTC)
:I didn't write that sentence; it comes from WP and I tried to change as little of the text as I could. As I understand you, there are two Hilbert spaces: ''L''<sup>2</sup>([''a'',&nbsp;''b''],''dx'') (containing the <i>u</i>'s) and ''L''<sup>2</sup>([''a'',&nbsp;''b''],''w''(''x'')&nbsp;''dx'') (containing the ''y'''s). If I say something like that, would you agree?
:--[[User:Paul Wormer|Paul Wormer]] 06:46, 20 October 2009 (UTC)
:: I did not think of it at once, but the substitution is a homomorphism from the weighted space to the "weightless" one:
::: <math> \varphi (f) = \sqrt w f </math>.
:: This maps an operator ''L''<sub>1</sub> to
::: <math> \varphi \circ L_1 \circ \varphi^{-1} </math>
:: and preserves adjointness. If we use the operator as defined above
::: <math>
\varphi \circ L_1 \circ \varphi^{-1} = \sqrt w \left ( \frac L w \right ) (\sqrt w)^{-1} =  w^{-1/2} L w^{-1/2}
</math>
:: This shows that "your" substitution and "my" operator are essentially the same method, described differently.
:: [[User:Peter Schmitt|Peter Schmitt]] 08:11, 20 October 2009 (UTC)
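The conjugation can be verified directly in matrix form. In this editor's sketch (random data, illustrative only), &phi; is realized as multiplication by &radic;''w''; conjugating the matrix version of ''L''<sub>1</sub> = ''L''/''w'' yields the symmetric form ''w''<sup>&minus;1/2</sup>''L''&thinsp;''w''<sup>&minus;1/2</sup>, which is Hermitian:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 7
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
L = A + A.conj().T                   # selfadjoint in the plain inner product
w = rng.uniform(0.5, 2.0, size=n)    # positive real weight
s = np.sqrt(w)

L1 = L / w[:, None]                  # Peter's operator: (L1 f) = (L f)/w
# phi multiplies by sqrt(w); conjugating L1 by phi gives the symmetric form:
M = np.diag(s) @ L1 @ np.diag(1.0 / s)
print(np.allclose(M, L / np.outer(s, s)))   # True: M = w^{-1/2} L w^{-1/2}
print(np.allclose(M, M.conj().T))           # True: M is Hermitian
```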
::: Wiping sweat from my brow: does this mean that we finally agree? If so, could you, wearing your math editor cap, please adapt the main text? Another thing: I've been reading about the Arzelà-Ascoli theorem, and I wonder: is the statement about the compact operator and the Arzelà-Ascoli theorem (unintelligible as it is to the non-mathematician) correct? I read about uniformly convergent sequences of functions on compact metric spaces that are equicontinuous (as I learned yesterday from Courant and Hilbert: ''gleichgradig stetig'') according to this theorem. As a mathematical layman I don't see the connection with the resolvent and its complete set of eigenfunctions. --[[User:Paul Wormer|Paul Wormer]] 08:48, 20 October 2009 (UTC)
:::PS You write "homomorphism", why not "isomorphism"?--[[User:Paul Wormer|Paul Wormer]] 08:52, 20 October 2009 (UTC)
:::: Paul, we agree if you admit that my transformations are (also) correct :-) I think that I never claimed that you are '''wrong''' with the way you did it -- only when you didn't accept mine ;-).
:::: I avoided "isomorphism" because, depending on the domain of the functions and on the weight ''w'', the two spaces may be different vector spaces (<math> \int fg </math> and <math>\int fgw </math> may not exist for the same pairs of functions). Of course, I could have used monomorphism instead, but this would also not be exact because the domain may be a subspace only. (Then it is an isomorphism between subspaces.) If you take continuous functions on a bounded interval and <math> 0<\epsilon \le w(x) \le C < \infty </math> then, of course, it is an isomorphism.
:::: Concerning editing the text: I am thinking about distributing the content over three pages: S-L equation (for the equation, results about the solutions, some examples; the direct orthogonality proof would fit there as a subpage), S-L operator (for the operator formulation and application of functional analysis), and S-L theory for a more general overview, including some material on the history of this field. What do you think?
:::: [[User:Peter Schmitt|Peter Schmitt]] 13:51, 20 October 2009 (UTC)
[unindent]
It is a good idea to split Sturm-Liouville, so please go ahead. When I said "isomorphism", I had in mind <math> 0<\epsilon \le w(x) \le C < \infty </math> (which I would express as "real positive-definite w", finiteness being understood by the average physicist).  I did not think of the domain of ''f'' and ''g'', quadratic integrability, etc. This is the main difference between a mathematician and a physicist: A physicist believes that all functions are infinitely differentiable and that all  series converge uniformly. A mathematician believes the opposite until he can prove that it is otherwise. --[[User:Paul Wormer|Paul Wormer]] 15:55, 20 October 2009 (UTC)
: This difference is a good basis for cooperation ... [[User:Peter Schmitt|Peter Schmitt]] 22:15, 20 October 2009 (UTC)

Latest revision as of 16:15, 20 October 2009


== Numbered equations ==

Hi Daniel,

I had to transform the templates provided in the WP article for numbered equations into plain old wikimarkup. I attempted to bring these templates over from WP, but they called many other templates and when I got them all over, the combined result didn't work. I decided to use a simple bit of html that defined a span with right justification and also defined an anchor. To reference the equation you then only need to insert a mediawiki markup referencing the anchor. It wouldn't be hard to turn this all into two templates if you think that would be useful. Dan Nessett 19:28, 2 September 2009 (UTC)


Redacted comment. I see it now.

== Error ==

It seems to me that the article contains an error. Consider the definition:

:<math> Lu = \frac{1}{w(x)} \left( -\frac{d}{dx}\left( p(x) \frac{du}{dx} \right) + q(x)\, u \right) </math>

Contrary to what is implied in the article, the operator L thus defined is not self-adjoint, unless 1/w(x) commutes with the operator to its right. This is in general not the case. The proper way to transform is (L in the next equation is w(x) times L in the previous equation):

:<math> w^{-1/2}\, L\, w^{-1/2}\, u = \lambda\, u </math>

with

:<math> u(x) \equiv w(x)^{1/2}\, y(x) </math>

Since w(x) is positive-definite, w(x)<sup>−1/2</sup> is well-defined and real. The operator is self-adjoint.

--Paul Wormer 08:09, 14 October 2009 (UTC)

I am getting out of my field of expertise here, but let me ask some questions. First, the section defines L and then posits, " L gives rise to a self-adjoint operator." I am not sure what this means, but one way of interpreting it is L itself is not self-adjoint, but some transformation of it is. That is, L may not itself have real eigenvalues. This statement is followed by, "This can be seen formally by using integration by parts twice, where the boundary terms vanish by virtue of the boundary conditions. It then follows that the eigenvalues of a Sturm–Liouville operator are real and that eigenfunctions of L corresponding to different eigenvalues are orthogonal." This is, admittedly, vague. However, later in the section is the statement, "As a consequence of the Arzelà–Ascoli theorem this integral operator is compact and existence of a sequence of eigenvalues αn which converge to 0 and eigenfunctions which form an orthonormal basis follows from the spectral theorem for compact operators." So, on the surface, it seems the integral operator plays some role in the argument.
That said, let me note that from the perspective of someone who has very little understanding of S-L theory, this section is obscure and would benefit from a complete rewrite. Dan Nessett 17:09, 14 October 2009 (UTC)
Paul, the factor 1 / w is cancelled by w in the definition of the inner product (I think). Peter Schmitt 19:07, 14 October 2009 (UTC)

[unindent]

Let me answer first Dan and then Peter. I forget about the real function q and define the operator Λ for real p(x)

The operator acts on a space of functions u, v, for which holds (integration with weight 1)

Here it was used that u and v satisfy periodic boundary conditions (are equal on the boundaries a and b). Then on the space of functions with this boundary condition the derivative is anti-Hermitian (anti-self-adjoint):

Apply the rule valid for arbitrary operators (capitals) and functions (lower case):

where the bar indicates complex conjugate. Then clearly, because p(x) is real, we have

so that Λ is Hermitian (self-adjoint). If we divide by real w on the left we get

Unless Λ and w commute (and they don't) it is clear that the new operator is non-Hermitian.

Why do I integrate with weight 1 and not with weight w? This is because the original S-L problem has the form

Multiply by u(x) and integrate with weight 1

which is what we want. If we would introduce weight w it would appear squared on the right, which is what we don't want.

Two more comments.

I noticed the error because I'm very familiar with the generalized matrix eigenvalue problem. (I sometimes felt during my working life that all I did was set up and solve such problems.)

The matrix H is Hermitian, S is Hermitian and positive-definite (has non-zero positive eigenvalues) and ε contains the eigenvalues on its diagonal. Beginning students sometimes divide this problem by S, but that is not the way to solve it. One way is transforming by S<sup>−1/2</sup>. Exactly as I did before.
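The transformation by S<sup>−1/2</sup> can be illustrated with a small numerical sketch (random H and S, an editor's illustration): S<sup>−1/2</sup> H S<sup>−1/2</sup> is Hermitian and has the same real eigenvalues as the non-Hermitian S<sup>−1</sup>H.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = A + A.conj().T                           # Hermitian
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
S = B @ B.conj().T + n * np.eye(n)           # Hermitian, positive-definite

# Build S^{-1/2} from the eigendecomposition of S:
evals, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(evals**-0.5) @ V.conj().T
Ht = S_inv_half @ H @ S_inv_half             # Hermitian transformed problem

e_sym = np.sort(np.linalg.eigvalsh(Ht))
e_gen = np.sort(np.linalg.eigvals(np.linalg.solve(S, H)).real)
print(np.allclose(Ht, Ht.conj().T))          # True
print(np.allclose(e_sym, e_gen))             # True: same spectrum as S^{-1} H
```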

The second comment: I own the classic two-volume book by Courant and Hilbert (2nd German edition). In volume I, chapter V, §3.3 they discuss the S-L problem and introduce the transformation (in their notation) z(x) = v(x) √ρ [where ρ(x) ≡ w(x)]. They give the transformed S-L operator Λ without proof. It took me about two hours to show that their form is indeed equal to the form that I started this discussion with:

with

It took me so long because Courant-Hilbert use a notation in which it is very unclear how far a differentiation must act. I hope that I avoided this unclarity by introducing f(x). The second term of the transformed Λ is a function; no differentiations are left "free" to act to the right.

--Paul Wormer 09:20, 15 October 2009 (UTC)

Paul, I freely admit that you know much more about partial differential equations and differential operators than I do. This is one of the topics I tried to avoid most of the time. And I believe that your transformation works (though I did not check it in detail).
But I think you misunderstood me. The operator is said to be selfadjoint with respect to the inner product
:<math> \langle u,v \rangle_w := \int \overline{u(x)}\, v(x)\, w(x)\, dx </math>
not the inner product
:<math> \langle u,v \rangle := \int \overline{u(x)}\, v(x)\, dx </math>
and in the first definition the 1 / w cancels with the w on both sides (your L):
You may want to check this in the book (p. 121) cited in the bibliography (incidentally, by one of my colleagues here in Vienna).
Peter Schmitt 12:21, 15 October 2009 (UTC)
Remark: In any case, this shows that the article needs either improvement or (preferably?) a "fresh" approach. Peter Schmitt 12:49, 15 October 2009 (UTC)
Peter, this weight function (measure) is tricky. I know of its existence (in group theory it is Haar measure and in the theory of orthogonal polynomials it gives the different kinds of such polynomials). But look at the Sturm-Liouville eigenvalue equation
Suppose you are right, then projection of left and right hand side by u(x) is done as follows
I cannot prove that this is incorrect, but the squared weight does not feel right. To me it seems that the weight is already taken care of by its appearance in the S-L eigenvalue equation. If we take unit weight (as Courant-Hilbert do), then your definition of inner product appears automatically on the right hand side:
If I may make the matrix analogy (Hermitian overlap matrix W):
Then you project as
and I project as:
where on the right hand side we find the inner product of u and v in an non-orthonormal basis. In brief, integration with unit weight seems to me the appropriate inner product in the context of Sturm-Liouville theory, where the non-unit w appears already in the equation. However, I cannot prove it, it is my gut feeling (and I may call upon the authority of Courant and Hilbert, although I realize that that is an invalid argument).
--Paul Wormer 13:46, 15 October 2009 (UTC)
The above discussion is EXACTLY the reason why I joined CZ! Argue merits instead of evil reverts and malicious attacks.  :) David E. Volk 14:03, 15 October 2009 (UTC)
I think it is supposed to be read as
showing that Λ / w (with weight) is selfadjoint because Λ is selfadjoint without weight.
It is not a question of correct and incorrect, but rather of "traditional" and "modern". It is the same vector space, but with another measure for orthogonality. Peter Schmitt 14:41, 15 October 2009 (UTC)

[unindent]

Peter, I'm not sure what you mean by

I suspect that I would write this as

If that's correct then Λ acts on 1/w, and 1/w no longer cancels the weight w appearing in the volume element.

Do you agree with the following?

This follows, because (IMHO): Λ<sup>&lowast;</sup> = Λ and w is real, i.e., (1/w)<sup>&lowast;</sup> = 1/w. Further (or do you disagree with the following inequality, Peter?):

It is as with matrices: even if H and W are Hermitian, W<sup>−1</sup>H is not Hermitian (unless W<sup>−1</sup> and H commute).
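A quick numerical illustration of the matrix statement (random data, an editor's sketch): W<sup>−1</sup>H indeed fails the Hermitian symmetry test, yet, being similar to the Hermitian W<sup>−1/2</sup>HW<sup>−1/2</sup>, it still has real eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = A + A.conj().T                    # Hermitian
w = rng.uniform(0.5, 2.0, size=n)     # positive real diagonal W
Winv_H = H / w[:, None]               # W^{-1} H = diag(1/w) @ H

print(np.allclose(Winv_H, Winv_H.conj().T))            # False: not Hermitian
# Yet W^{-1} H is similar to the Hermitian W^{-1/2} H W^{-1/2},
# so its eigenvalues are still real:
print(np.max(np.abs(np.linalg.eigvals(Winv_H).imag)))  # ~ 0
```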

To David: I would like to correct what I see as an error, but I'm not emotional about it, and I just want to make sure that I am not mistaken before I edit the main article.

--Paul Wormer 16:19, 15 October 2009 (UTC)

I would like to comment on this discussion from a completely different perspective. As I have stated numerous times, I am unfamiliar with S-L theory. So when I read the article I read it in the role of a student. My goal is to better understand the material. The discussion in this section of the talk page has helped me to do that.
But, it probably is not appropriate to put this kind of detailed exposition in an encyclopedia article. That suggests to me that there is a place for more in depth discussions in a subpage associated with the article. One option is a tutorial on S-L theory. Another would be to put this kind of discussion in an appendix subpage. But note, doing this somewhat changes the goals of CZ. This goal is expanded beyond that of building an encyclopedia to building an information resource. It becomes more like a library. I think this is a topic that the draft charter committee should examine, since the concept of subpages seems to move us in that direction anyway. Dan Nessett 16:38, 15 October 2009 (UTC)

Now we come to the point: I read it as

Remember, in the article L is defined as Λ / w, with 1/w acting simply as a scalar factor for each x (similar to multiplication with a diagonal matrix). Since w is real these factors remain unchanged in the adjoint operator

Peter Schmitt 17:50, 15 October 2009 (UTC)

I don't understand what you're saying here. I repeat once more my arguments: the operator Λ is a differential operator, 1 / w(x) is a function of x, hence the two do not commute (if they did, quantum theory would not exist). The notation is ambiguous: is it (1/w) Λ or Λ (1/w)?
A diagonal matrix does not commute with an arbitrary square matrix (except if the diagonal matrix is c E). This is the content of Schur's lemma.
--Paul Wormer 07:44, 17 October 2009 (UTC)
== Making sure credit is given where credit is due ==

Since I have decided to no longer contribute to this article, due to my lack of expertise in the underlying field, I want to get on record who is responsible for the orthogonality proof on the Proofs subpage while I still remember to do it. I am recording this on the article talk page, since from the history of Proofs subpage the correct credit is not evident.

The proof of the orthogonality relation was developed by a friend and former colleague of mine at LLNL, John G. Fletcher. While I converted it from MS Word to mediawiki markup, I had nothing to do with the proof's logic. I state this since readers of the history of the proof page might come to an incorrect conclusion by virtue of the fact that I am recorded as the person who added the proof to the page. Dan Nessett 18:02, 16 October 2009 (UTC)

New section

I start a new section, the one but previous one became too long.

First remark, Peter ask yourself the question:

Second remark, I don't think you apply the turnover rule correctly. To show how I think it should be done, I will put the bra in square brackets and consider your starting integral. Divide by real w in the bra in order to redefine the volume element (the integration measure). Apply the usual definition by means of the turnover rule of the adjoint w.r.t. the measure w(x)dx (the last equality in the next line gives the usual definition of by the turnover rule in--what Peter calls--the second Hilbert space):

where,

and

Hence using again that 1/w(x) is real

On the other hand, dividing by w in the ket and again using the definition (turnover rule) of adjoint w.r.t. w(x)dx gives for the operator (1/w(x)) L in the ket the corresponding (overlined) form in the bra:

Finally comparing A and B

All I did here is prove the very well-known rule for the adjoint of a product of operators P and Q:
:<math> (PQ)^\ast = Q^\ast P^\ast . </math>

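The product rule for adjoints that Paul invokes is easy to verify numerically for matrices, where the adjoint is just the conjugate transpose. This is an editor's illustration with arbitrary random stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two arbitrary complex matrices playing the roles of P and Q
P = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# For matrices the adjoint is the conjugate transpose
adjoint = lambda M: M.conj().T

# (PQ)* = Q* P* -- the order of the factors is reversed
rule_holds = np.allclose(adjoint(P @ Q), adjoint(Q) @ adjoint(P))

# Keeping the original order gives the WRONG result in general
order_matters = not np.allclose(adjoint(P @ Q), adjoint(P) @ adjoint(Q))
```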
In the meantime I adapted the main text because this discussion convinced me that Courant and Hilbert made the correct transformation, and following their lead I edited the text.

--Paul Wormer 08:40, 18 October 2009 (UTC)

Paul, you are very patient. Obviously, I have not been able to express myself clearly enough. We are talking about different things.
  • You consider the composition of two operators L and W, where W is defined by
:<math> (Wf)(x) := \frac{f(x)}{w(x)} , </math>
i.e., W(f) is the pointwise product of the two complex functions f and 1 / w.
Of course, this composition is not commutative (except in special cases): <math> LW \ne WL </math>.
  • I consider the linear operator L and a second linear operator L1 defined by modifying L.
L1f is defined by
:<math> (L_1 f)(x) := \frac{(Lf)(x)}{w(x)} , </math>
i.e., L1f is the pointwise product of the two complex functions Lf and 1 / w. (This multiplication clearly is commutative.)
  • In the first formula of your "second remark" the transformations are not those I have used. They should be:
  • Regarding the "two Hilbert spaces": It is the same vector space, but with different inner product.
is a Hilbert space homomorphism (it is linear and ).
Peter Schmitt 10:01, 19 October 2009 (UTC)
I fully understand the point (and I agree with it) about the two Hilbert spaces; see also my modification to the main text where I introduce y and u. However, I maintain that the following is not correct:
:<math> (L_1^\ast f)(x) = \frac{(L^\ast f)(x)}{w(x)} . </math>
In words, you define L1g(x) acting in the ket as (1/w)Lg(x). Namely, you say: "first apply L to g, then divide the resulting function pointwise by w". You probably define L1f(x) acting in the bra as: "first apply L to the complex conjugate of f, then divide the resulting function pointwise by w". You can define this in words, but IMHO this is not what the equations tell you. I follow all my books on linear algebra, functional analysis, and quantum mechanics and say that the definition of the adjoint is such that the order of L and 1/w is reversed. I hoped to convince you by introducing the auxiliary function h(x) in the bra, but apparently I failed.
Let us stop now this discussion and let's concentrate on the changes I made to the text.
Basically I used a third (symmetric) option:
:<math> w^{-1/2}\, L\, w^{-1/2}\, u = \lambda u , \qquad y = w^{-1/2} u . </math>
As I wrote before, I know this trick from the matrix generalized eigenvalue problem, and I saw it in Courant-Hilbert (loc. cit.) in the context of the operator S-L problem. Maybe you'll agree that my changes are correct. In any case, in your view the original text is correct, so probably you'll find my changes needlessly complicated (if not wrong); I'm curious to hear your opinion.
--Paul Wormer 12:05, 19 October 2009 (UTC)
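The matrix version of Paul's symmetric option can be checked directly: w^(-1/2) L w^(-1/2) is manifestly symmetric, and its spectrum coincides with that of the (generally non-symmetric) operator (1/w)L, i.e. with the generalized eigenvalue problem L y = λ w y. A NumPy sketch, again an editor's illustration with random stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
L = A + A.T                              # symmetric L (stands in for the S-L operator)
w = rng.uniform(0.5, 2.0, n)             # positive "weight function" on the grid
W_inv_sqrt = np.diag(1.0 / np.sqrt(w))

# The symmetric option: S = w^{-1/2} L w^{-1/2} is manifestly symmetric ...
S = W_inv_sqrt @ L @ W_inv_sqrt
S_is_symmetric = np.allclose(S, S.T)

# ... and is similar to (1/w) L, so both have the same (real) spectrum,
# which is also the spectrum of the generalized problem  L y = lambda w y.
eig_S = np.sort(np.linalg.eigvalsh(S))
eig_gen = np.sort(np.linalg.eigvals(np.diag(1.0 / w) @ L).real)
spectra_agree = np.allclose(eig_S, eig_gen)
```

The design point is the one Paul makes: the symmetrized form lets one use ordinary symmetric eigensolvers (and guarantees real eigenvalues), while the substitution u = w^(1/2) y carries the solutions back.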

(unindent)

Paul, you want to end this. But I hope you will bear with me once more. Starting from a selfadjoint operator, I define a linear operator which is also selfadjoint. To show this I'll only use explicit computation, no bra and ket manipulations that I could possibly get wrong :-)

Assume L (on a space of complex functions) is selfadjoint, i.e.,
:<math> \langle Lf, g \rangle = \langle f, Lg \rangle , \qquad (1a) </math>
:<math> \int (Lf)^*(x)\, g(x)\, dx = \int f^*(x)\, (Lg)(x)\, dx . \qquad (1b) </math>

  • L1 is defined as follows: L1f is the function
:<math> (L_1 f)(x) := \frac{(Lf)(x)}{w(x)} \qquad (2) </math>
(where w is a positive real function), i.e., L1f is the (commutative) product (in the function algebra) of the two functions Lf and 1/w. L1 is linear.


Claim: L1 is selfadjoint. It must be shown that
:<math> \langle L_1 f, g \rangle_w = \langle f, L_1 g \rangle_w , \qquad (3) </math>
where
:<math> \langle f, g \rangle_w := \int f^*(x)\, g(x)\, w(x)\, dx . </math>
We abbreviate F = Lf, G = Lg, F1 = L1f, G1 = L1g. Because of (2) it is F1 = F/w and G1 = G/w.

It is:
:<math> \langle L_1 f, g \rangle_w = \int F_1^*\, g\, w\, dx = \int \frac{F^*}{w}\, g\, w\, dx = \int F^*\, g\, dx , \qquad (4a) </math>
:<math> \langle f, L_1 g \rangle_w = \int f^*\, G_1\, w\, dx = \int f^*\, \frac{G}{w}\, w\, dx = \int f^*\, G\, dx . \qquad (4b) </math>

  • Since (1b) is <math>\textstyle \int F^*\, g\, dx = \int f^*\, G\, dx</math>, it follows that (4a) = (4b), thus (3).

Peter Schmitt 19:55, 19 October 2009 (UTC)
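Peter's computation (4a) = (4b) can be mimicked with explicit integrals. In this editor's sketch a multiplication operator by a real function stands in for the Sturm-Liouville operator (chosen so that the quadrature comparison is exact; the selfadjointness argument is the same), and the two weighted inner products indeed agree:

```python
import numpy as np

# Grid on [0, pi] and a simple trapezoid quadrature
x = np.linspace(0.0, np.pi, 201)
dx = x[1] - x[0]

def integrate(h):
    """Trapezoid approximation of the integral of h over [0, pi]."""
    return (h[0] / 2 + h[1:-1].sum() + h[-1] / 2) * dx

v = 1.0 + x**2          # real v(x): multiplication by v is selfadjoint in plain L2
w = 2.0 + np.cos(x)     # positive real weight function

f = np.sin(x) + 1j * np.cos(2 * x)
g = np.exp(-x) * np.sin(3 * x)

L = lambda u: v * u     # a selfadjoint L (multiplication by the real function v)
L1 = lambda u: L(u) / w # (L1 u)(x) = (Lu)(x) / w(x), Peter's modified operator

lhs = integrate(np.conj(L1(f)) * g * w)   # <L1 f, g>_w  -- this is (4a)
rhs = integrate(np.conj(f) * L1(g) * w)   # <f, L1 g>_w  -- this is (4b)
agree = np.isclose(lhs, rhs)
```

Exactly as in steps (4a) and (4b), the 1/w coming from the operator cancels against the w in the measure on both sides, so both integrands reduce to v f* g.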

Your changes

Paul, you write:

"The proper setting for this problem is the Hilbert space L<sup>2</sup>([a,b], w(x)dx) with scalar product:"

But in fact you are still using the scalar product without weight when you write:

:<math> \langle y_1, y_2 \rangle := \int_a^b y_1^*(x)\, y_2(x)\, w(x)\, dx = \int_a^b u_1^*(x)\, u_2(x)\, dx </math>

The first integral is the scalar product, and the second is a computation (much like my transformations) after substitution.

Peter Schmitt 23:02, 19 October 2009 (UTC)

I didn't write that sentence; it comes from WP, and I tried to change the text as little as I could. As I understand you, there are two Hilbert spaces: L<sup>2</sup>([a,b], dx) (containing the u's) and L<sup>2</sup>([a,b], w(x)dx) (containing the y's). If I say something like that, would you agree?
--Paul Wormer 06:46, 20 October 2009 (UTC)
I did not think of it at once, but the substitution is a homomorphism from the weighted space to the "weightless" one:
:<math> \varphi : f \mapsto \sqrt{w}\, f , \qquad \langle f, g \rangle_w = \langle \varphi f, \varphi g \rangle . </math>
This maps an operator L1 on
:<math> \varphi\, L_1\, \varphi^{-1} </math>
and preserves adjointness. If we use the operator L1 = (1/w)L as defined above, then
:<math> \varphi\, L_1\, \varphi^{-1} = \frac{1}{\sqrt{w}}\, L\, \frac{1}{\sqrt{w}} . </math>
This shows that "your" substitution and "my" operator are essentially the same method, described differently.
Peter Schmitt 08:11, 20 October 2009 (UTC)
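Both halves of Peter's reconciliation, that f ↦ √w f preserves inner products and that it carries L1 = (1/w)L into the symmetric form w^(-1/2) L w^(-1/2), can be checked in the finite-dimensional analogue (once more an editor's illustration with random stand-in matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
L = A + A.T                        # symmetric stand-in for the S-L operator
w = rng.uniform(0.5, 2.0, n)       # positive real weight
sqrt_w = np.sqrt(w)

phi = lambda f: sqrt_w * f         # the substitution f -> sqrt(w) * f

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# phi preserves inner products: <f, g>_w = <phi f, phi g>
inner_preserved = np.isclose(np.conj(f) @ (w * g),
                             np.conj(phi(f)) @ phi(g))

# phi conjugates L1 = (1/w) L into the symmetric operator w^{-1/2} L w^{-1/2}
L1 = np.diag(1.0 / w) @ L
conjugated = np.diag(sqrt_w) @ L1 @ np.diag(1.0 / sqrt_w)
symmetric_form = np.diag(1.0 / sqrt_w) @ L @ np.diag(1.0 / sqrt_w)
same_operator = np.allclose(conjugated, symmetric_form)
```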
Wiping sweat from my brow: does this mean that we finally agree? If so, could you, wearing your math editor cap, please adapt the main text? Another thing: I've been reading about the Arzelà-Ascoli theorem, and I wonder whether the statement about the compact operator and the Arzelà-Ascoli theorem (unintelligible as it is to the non-mathematician) is correct. I read that, according to this theorem, equicontinuous (as I learned yesterday from Courant and Hilbert: gleichgradig stetig) sequences of functions on compact metric spaces have uniformly convergent subsequences. As a mathematical layman I don't see the connection with the resolvent and its complete set of eigenfunctions. --Paul Wormer 08:48, 20 October 2009 (UTC)
PS You write "homomorphism", why not "isomorphism"?--Paul Wormer 08:52, 20 October 2009 (UTC)
Paul, we agree if you admit that my transformations are (also) correct :-) I think that I never claimed that you are wrong with the way you did it -- only when you didn't accept mine ;-).
I avoided "isomorphism" because, depending on the domain of the functions and on the weight w, the two spaces may be different vector spaces (<math>\langle f, g \rangle</math> and <math>\langle f, g \rangle_w</math> may not exist for the same pairs of functions). Of course, I could have used "monomorphism" instead, but this would also not be exact, because the domain may be a subspace only. (Then it is an isomorphism between subspaces.) If you take continuous functions on a bounded interval and a continuous positive w, then, of course, it is an isomorphism.
Concerning editing the text: I am thinking about distributing the content on three pages: S-L equation (for the equation, results about the solutions, some examples; the direct orthogonality proof would fit there as a subpage), S-L operator (for the operator formulation and application of functional analysis) and S-L theory for a more general overview, including some material on the history of this field. What do you think?
Peter Schmitt 13:51, 20 October 2009 (UTC)

[unindent] It is a good idea to split Sturm-Liouville, so please go ahead. When I said "isomorphism", I had in mind <math>0 < w(x) < \infty</math> (which I would express as "real positive-definite w", finiteness being understood by the average physicist). I did not think of the domain of f and g, quadratic integrability, etc. This is the main difference between a mathematician and a physicist: a physicist believes that all functions are infinitely differentiable and that all series converge uniformly; a mathematician believes the opposite until he can prove that it is otherwise. --Paul Wormer 15:55, 20 October 2009 (UTC)

This difference is a good basis for cooperation ... Peter Schmitt 22:15, 20 October 2009 (UTC)