Messages - cold one
ok, having trouble reconciling all that. ........ on that particular front, I think I have to stop trying to learn at a handwavey level and resume actually learning the material.
different question. are photons real or not? on the one hand, they are one of the force carrier bosons (for EM in particular), and I think I have read a couple times that those force carrier bosons are just an artifact of the perturbation theory computations, which left me with the impression that they are possibly or presumably not real things. .... on the other hand, basic quantum mechanics was in large part historically motivated by the idea that photons are quite real, and I feel like my quantum professor would have strangled someone who said otherwise. ..... does the resolution to this apparent contradiction have to do with the EM force mediators being virtual photons, as opposed to garden variety photons?
Whether something is real is a question above/below my paygrade (maybe ask a philosopher). But I think you'd have a hard time arguing that photons are any more or less real than electrons or protons. They're a very similar sort of thing. On top of that they can burn you, you can literally see them (or maybe you can see bunches of a few, but I don't think it's more than just a few), and in a lab you can easily produce, detect, and manipulate them.
Now, it's true that in QFT Feynman diagrams there are "virtual photons" - but it's not just photons. Every particle species shows up that way in some Feynman diagrams and virtual particles of every type contribute at some level to just about every process in nature (virtual particles are just internal lines in Feynman diagrams, and they don't satisfy the classical equation of motion - for a virtual photon E^2 =/= p^2 c^2). It's also true that the leading QFT diagram that reproduces the Coulomb potential in the Born approximation has a virtual photon line.
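To make "doesn't satisfy the classical equation of motion" concrete, here is a toy example (mine, not from the thread): in elastic scattering in the center-of-momentum frame, the exchanged photon carries momentum but no energy, so its four-momentum squared is negative.

```latex
% Elastic scattering, center-of-momentum frame (units with c = 1):
% the incoming particle keeps its energy, E_1 = E_1', so the exchanged
% photon's four-momentum q = p_1 - p_1' has q^0 = 0 while \vec{q} \neq 0:
q^2 \;=\; (q^0)^2 - |\vec{q}\,|^2 \;=\; -|\vec{q}\,|^2 \;<\; 0
% whereas a real (on-shell) photon satisfies q^2 = 0.
```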
so QFT, unlike QM, GR, and all (?) of classical physics, does not have a master differential equation. is this because time is treated differently in QFT? (something to do with it being used to index states, rather than as a regular variable?)
QFT = QM, just with lots of degrees of freedom. It has a Hilbert space, a Hamiltonian, and a time-evolution operator. It also has a Schrodinger equation for the time evolution of the wave functional. So if you think the Schrodinger equation is the "master differential equation" for QM (I don't), then you could say the same for QFT.
The place where time really plays a different role is in quantum gravity, where the freedom to change coordinates makes time more like a redundancy of the description than a real variable. As a consequence the Schrodinger equation there is that the time derivative of the wavefunction is zero (H \psi = 0, if you want).
from some googling around, it sounds like the schwinger-dyson equation might be what I've been looking for (or at least the closest thing to it).
I wouldn't call S-D a "dynamical" equation - time doesn't have much to do with it, apart from the appearance of time-ordering.
I realize I may have been asking the question wrongly. I also gather SD is several steps removed from the procedure you laid out. ..... from what you and wikipedia are telling me, i understand that the scattering matrix is a limiting case approximation (of some system) for a particular problem that happens to be of broad relevance (particles coming in from infinity, doing any of a number of local interactions, and going out to infinity). and then the perturbation theory is performed based on that. and as you reminded me, the perturbation theory in basic QM does not even give a full approximate solution, it's an incomplete approximate solution which hopefully has enough information for your purposes. so fair enough on saying it doesn't count as "solving" an equation, assuming you have one.
Well, you can think of S-D as a step towards deriving the Feynman rules. If you look at the equations at the bottom of that wiki page, you see some perturbative expansions for time-ordered correlation functions. (Just keep applying S-D to go to higher orders.) There is another formula, called the LSZ reduction formula, that relates time-ordered correlators to the S-matrix.
does the scattering matrix at least correspond to a limiting case of the schwinger-dyson equation? it sounds like heisenberg et al. built it up heuristically, without SD known to them a priori. but can the s - matrix be derived from SD, by declaring your limit and throwing out negligible terms?
Yes, see above.
maybe? I feel like there has to be a dynamical equation floating around here somewhere. don't statements about physics ultimately always take that form?
also, looking at cold one's steps 1a and 1b.... my understanding has been that the main (only?) purpose of stating a lagrangian or hamiltonian function is to derive an associated set of dynamical equations from it. and I understand perturbation theory to be a method of approximating solutions to an equation. .... so, to me, both steps 1a and 1b point to the presence of some unstated equation in between the two.
To give more details on this - think about perturbation theory in quantum mechanics. There, you're not really solving a dynamical equation. Instead, you're working out the (say) energy levels of the Hamiltonian subject to a perturbation that makes it hard or impossible to find them exactly and/or to solve the Schrodinger equation exactly.
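For concreteness, here is a toy numerical version of that (the 3-level Hamiltonian and coupling values are invented for illustration): Rayleigh-Schrodinger perturbation theory for the ground-state energy, checked against exact diagonalization.

```python
import numpy as np

# Toy example (numbers invented): a 3-level system with a small perturbation.
H0 = np.diag([0.0, 1.0, 2.5])          # unperturbed energies
V = np.array([[0.0, 0.2, 0.0],
              [0.2, 0.0, 0.3],
              [0.0, 0.3, 0.0]])        # perturbation
lam = 0.1

# Exact ground-state energy of H0 + lam*V
exact = np.linalg.eigvalsh(H0 + lam * V)[0]

# Rayleigh-Schrodinger perturbation theory for the ground state (n = 0):
# E ~ E0 + lam*<0|V|0> + lam^2 * sum_{m != 0} |<m|V|0>|^2 / (E0 - Em)
E = np.diag(H0)
first = lam * V[0, 0]
second = lam**2 * sum(V[m, 0]**2 / (E[0] - E[m]) for m in range(1, 3))
approx = E[0] + first + second

print(exact, approx)  # nearly identical for this small lam
```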
Perturbation theory in QFT is similar, except the thing you're solving for isn't the energy levels. Instead, you want to compute amplitudes (and hence, probabilities) for the results of experiments you can actually do. You can't measure the energy because it's spread over the entire universe (remember, this is the Hamiltonian of a field theory, and the fields depend on space, so the energy is an integral over all space). What you can measure are things like scattering amplitudes - you smash a few particles together and see what comes out. So the perturbative expansion in QFT is designed to compute that, not solve a Schrodinger equation, or classical field equations of motion.
So no, there's no hidden dynamical equation that's being solved. The quantities you're computing are the probabilities that a given initial state "far in the past" (say, 2 particles of some definite type that are far apart but flying towards each other) will evolve after a "long" time into some definite final state (say, three particles of some other type with some other momenta that are far apart and flying apart).
I'm assuming the actual equation(s) being approximately solved at 1b are euler-lagrange or hamilton's equations, respectively....?
Sort of/not really.
You can of course solve those equations using perturbative expansions, and you can draw "Feynman diagrams" to help organize that. But that's not what's done in QFT. In QFT there are two things going on: first, the classical theory is non-linear, and second, it's quantum. The perturbative expansion in Feynman diagrams takes care of both of those. So it's not just solving the classical equations of motion, it's also solving the quantized theory.
also, all the talk about symmetries.... I gather they are baked into your lagrangian or hamiltonian when you define it at 1a....? I know (at a handwavey level) that each symmetry corresponds to a conserved quantity because of noether's theorem. I think I read that each group gives rise to a force-mediating boson for each of its generators. (so in qed there is one generator (photon) because its symmetry is defined by a one-parameter group. the weak force has 3 mediator particles, because it has a 3-generator group, etc.)
one piece of the puzzle I have no idea how to connect to any of the others is gauge. the only thing I know about gauge is that in classical e&m, the magnetic vector potential is only defined up to an arbitrary gradient term (because taking the curl to get the magnetic field kills any gradient). since the gradient term does not matter for most (any?) purposes, the ability to change it freely is known as gauge symmetry. I think I know that this is indeed related to "gauge" in qft, but I don't know how it connects to any of the other qft things we're talking about here.
Well, gauge invariance isn't exactly a symmetry. A symmetry is something that, given a solution (let's talk about classical physics, so a classical solution), generates another, different solution. For instance, rotation invariance (which means that angular momentum is conserved, via Noether). Gauge invariance is different - it's just a redundancy (with an exception noted below). When you act on a solution with a gauge transformation you get the same solution, not a different one. However there are special gauge transformations (the ones that are constant in space and time) that are real symmetries, and those produce a real conserved quantity (charge).
can you help me get a handwavey overview of how someone does qft? what does the process look like? here is the state of my understanding...
My rusty recollection of basic quantum mechanics:
1) specify the system:
a. # of particles
b. particle exchange symmetry
c. force between particles
d. classical potential
2) solve schrodinger's equation, given 1a-d. Solution may be analytic, approximate (perturbation theory, e.g.), or numerical. It yields a wavefunction, as a sum or integral of eigenfunctions of the system (depending on whether the boundary conditions quantize them).
OK. In that style:
a. write down the Lagrangian or Hamiltonian density of the classical field theory you're quantizing.
b. assuming the theory is perturbative (i.e. there is a small parameter you can use to do perturbation theory, like a small coupling constant), work out the rules for perturbation theory (typically in the form of rules for Feynman diagrams)
c. calculate quantities like decay rates of unstable particles and scattering cross-sections to the desired order in perturbation theory, which can then be compared to experiment
Obviously this is not exhaustive, but that will do for a start.
KG and Dirac are just the Euler-Lagrange equations for two (different) Lagrangians. Generally in classical mechanics Euler-Lagrange equations are related to the Hamilton-Jacobi equation, yes - they're equivalent. But it's generally pretty hard to use the HJE for field theories.
trying to teach myself QFT here. still early days. a few questions....
should I picture a Fock space as an infinite triangular matrix in some kind of fucked up outer product with a Hilbert space? i.e., the first column is 1-particle states, and has only one nonzero element. the second column has two nonzero elements, for the states of each of two particles. etc. and corresponding to every element of the matrix is an entire copy of the Hilbert space. so there's sort of a third index for the eigenfunctions of the space, or something like that.
why does every book on QFT make you learn the klein-gordon and Dirac equations after acknowledging they were both wrong/flawed?
Fock space = QFT Hilbert space (the whole thing, not a subspace) organized in a useful way for the operators you care about in weakly coupled or free QFTs. You can think of it as built out of tensor products of single-particle states, yes, because every basis state in Fock space is some number of creation operators acting on the vacuum.
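A small numerical sketch of that picture (my own toy example; the mode is truncated to at most 3 quanta so the operators fit in finite matrices):

```python
import numpy as np

# One bosonic mode, truncated to N levels so a and a-dagger are finite matrices.
N = 4
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.T                                    # creation operator
vac = np.zeros(N); vac[0] = 1.0               # vacuum |0>

# Every basis state is creation operators acting on the vacuum:
one = adag @ vac                              # |1>
two = adag @ adag @ vac / np.sqrt(2)          # |2>, normalized

# Two independent modes: the Fock space is built from tensor products.
adag1 = np.kron(adag, np.eye(N))              # creates a quantum in mode 1
adag2 = np.kron(np.eye(N), adag)              # creates a quantum in mode 2
vac12 = np.kron(vac, vac)
state = adag1 @ adag2 @ vac12                 # one particle in each mode

print(np.allclose(one, [0, 1, 0, 0]))         # True
```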
There's nothing wrong with the Dirac and KG equations if you apply them correctly. They are the equations of motion of the free classical theory you're quantizing, and the quantum field operators satisfy them too (again, in the free theory). They just aren't the relativistic generalization of the Schrodinger equation, which is maybe what you are referring to.
A diode in effect captures the electricity produced, and it is drained away doing work so there is no charge buildup. The graphene, like the flag, is being "waved" about by thermal energy in the environment. Each time the matrix is deformed, supposedly there is a charge buildup which is drained away. The molecule gets jostled away and back again, so the local environment cools but is replenished from the larger environment. At least this is the way I understand what they are saying.
Again, it's sort of like the drinking bird. As long as the molecules are jiggling and moving charge around, they are producing organized energy. So COE is definitely not violated. The material isn't a gas, so I don't know if that matters at all. But one can think of the material as being waved around by sonic energy in the environment and potentially capturing that, so it's not clear why random thermal energy won't work.
I'm not saying it will, I just haven't found the "catch" yet.
There are a lot of possible catches. For instance, maybe the graphene is at a different temperature than the environment. By the way I glanced at the published paper that article is based on and there's really nothing in there about extracting energy. It's just about measuring the fluctuations of the graphene. So the article is fluff.
The bottom line is that once the graphene+environment come to equilibrium, it's impossible to do work with the motion of the graphene. Think about it - "doing work" means moving some macroscopic object a macroscopic distance. For that to happen a lot of microscopic motions have to conspire. For instance, a lot more molecules have to hit one side of a piston than the other. But you can't control those motions because in equilibrium they are random (if you are controlling them, like confining some gas with higher pressure to a syringe, the system is not at equilibrium). So the only way they can do work is by a coincidence. But then you can calculate the probability for that coincidence to happen. A typical result will be probability = 10^(-(10^23)).
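The size of that coincidence is easy to estimate. A minimal sketch, assuming each of N molecules independently hits either side of the piston with probability 1/2:

```python
import math

# Rough illustration of the "coincidence" estimate: the chance that ALL of
# N independent molecules hit the same chosen side of a piston is (1/2)^N.
def log10_prob_all_one_side(N):
    # log10( 2^-N ) = -N * log10(2)
    return -N * math.log10(2)

print(log10_prob_all_one_side(100))    # ~ -30: already hopeless
print(log10_prob_all_one_side(1e23))   # ~ -3e22: the 10^-(10^23) scale above
```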
I completely understand that a heat engine cannot do that as it runs on statistical principles. The question becomes is there a way to selectively harness kinetic energy of individual molecules using another type of device such as described (conversion to electrical energy)?
No, for the same reason. The second law applies to everything, so heat engine or no heat engine is irrelevant.
If you take a look at the referenced article, is what they are stating physically impossible? It does seem like a violation of the 2nd law so I'm confused if there is a catch or not.
Hard to tell, but not necessarily. It sounds like the motion of the graphene they're observing is non-thermal. If so you might be able to extract work from it, but not indefinitely - it will eventually reach equilibrium and then you won't get any more work out of it (unless you "reset" it by doing work ON it). In other words it might act like a kind of battery or compressed spring.
Like a flapping flag, one can capture energy from wind by taking the energy of random motion and shaking an attached magnet next to a coil of wire. This produces a current which can be rectified and used to do useful work. The energy comes from the wind being slowed by the flag (no energy creation). The books all balance because the energy of the closed system never increases. But a type of randomness is used to do useful work.
Wind is not random motion. It's the opposite - it's coherent motion of an air mass (relative to the earth). That's why it can be used to generate energy. The equilibrium state of the atmosphere has no wind.
Can one do it on a thermal molecular level by capturing the energy of vibrating molecules and using it to do useful work? I don't know the answer to that.
No, at least not if the vibrations are thermal. Doing so would violate the second law and is impossible (or more precisely, incredibly unlikely).
OK, the question is, what happens after?
What happens when the air in the can expands and does work on the piston? We have this thing called the first law of thermodynamics that says that the heat is converted to work, therefore the air that did the work loses heat as it does the work on the piston and so gets cold which reduces the pressure in the can allowing the piston to return to its starting place.
The Second law however states that the above scenario is IMPOSSIBLE. The piston cannot return to its starting position without first cooling the can and cooling the air in the can by "ejecting" at least some of the heat to some external cold "reservoir". According to the second law, not ALL the heat can be converted into work. There will always be some "waste heat" to be removed.
Why? Well, it all becomes rather vague IMO. Something to do with entropy. The "universe" must move towards greater disorder. Whatever.
There's nothing remotely "vague" about it. Entropy has a clear, mathematical definition - it's just as precisely defined (and closely related to) the other thermodynamics quantities like heat, work, etc. If you know how, you can do calculations with it.
Moreover it's perfectly clear intuitively why the second law holds. You cannot convert the kinetic energy of gazillions of molecules of air into (say) compressing a single spring (and make no other compensating change to the system) because it would require a miracle to do so. All the air molecules would have to conspire to strike the spring in the right way at the right time. Because there are so many air molecules, the odds of that happening are absurdly small. It doesn't violate conservation of energy, but (for all intents and purposes) it never happens.
A good analogy is this. Make a movie of just about any interesting process - breaking an egg, throwing a rock into a pond, landing or crashing a kite or glider, etc. Now run the movie in reverse. Nothing about the reverse process violates any law of physics except the second law. So the second law is what tells you that ridiculous things like water currents conspiring to push a rock up out of a lake and sending it flying into the air don't happen.
But somehow it accomplishes the useful task of concentrating the energy into a more useful form.
Yes. Apparently that has to do with decreasing the entropy of the gas such that it now becomes possible to exploit the energy that was in the gas prior to even compressing it. I'm hoping someone (like PPNL) can confirm this understanding.
There is energy available in the gas, so lack of energy cannot be what determines whether or not you can do work. Instead what matters is whether or not the entropy is maximized. In your experiment, the air in the syringe is at a different pressure than the air around, and therefore the system is not at equilibrium, therefore the entropy is not maximized, and therefore it's possible to use the syringe to do work. The net effect will be to bring the system closer to equilibrium, increasing the net entropy.
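Here's a rough numerical version of that statement (the amounts and pressures are invented for illustration), for an ideal gas in a syringe at ambient temperature but above ambient pressure:

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

# Made-up numbers: 1 mol of ideal gas in a syringe at 3 atm,
# surroundings at 1 atm, everything at room temperature (298 K).
n, T = 1.0, 298.0
P, P0 = 3.0, 1.0   # only the pressure ratio matters below

# Same temperature as the surroundings, yet not at equilibrium: the
# compressed gas has LOWER entropy than it would at ambient pressure,
dS = -n * R * math.log(P / P0)     # entropy deficit, J/K (negative)

# and the maximum work from a reversible isothermal expansion back to P0 is
W = n * R * T * math.log(P / P0)   # J
print(dS, W)
```

Note that W is exactly -T times the entropy deficit, which is the precise sense in which entropy, not energy, is the limiting factor.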
From its inception, the second law of thermodynamics was on shaky ground. Based on a completely fallacious theory of heat, mathematically inaccurate, eventually largely repudiated by Carnot himself. The "Ideal Gas Law" has no room in its equations for the inter-molecular forces that make Joule-Thomson cooling of an expanding gas (and therefore air-conditioning, refrigeration, heat pumps, etc.) possible. Modern nano-technology is assaulting the second law on several fronts simultaneously, such as the transmission of heat to outer space we've already discussed and nantennas.
If you insist that a 200 year old assertion, that it is impossible for a heat engine to operate, drawing heat from a single reservoir, is absolutely inviolate, there isn't much to talk about here and I suppose it is a waste of your time.
What a bunch of ridiculous nonsense.
The ideal gas law has pretty much nothing to do with the second law. Nanotechnology has made no assaults whatsoever on the second law. Why? Because the second law is nothing more (or less) than statistics. It's true for the same reason you're not going to flip a coin and get heads a trillion trillion times in a row.
Statistical mechanics - and as a consequence, thermodynamics - is arguably the best established and most beautiful part of physics, because it relies only on extremely simple postulates.
The compressed air tank is not a closed system, more air is introduced. P, V and T can change but not 'n'. If 'n' changes, not closed.
The scenario is a large closed box full of air, inside of which is a piston, initially open. The piston is closed and then compressed (using a battery, say). This heats the gas and increases the pressure inside the piston. After a while the heat dissipates to the air outside the piston (but inside the box). Now everything is at the same temperature, but the compressed air in the piston can still do work.
When there is no temperature difference in a closed system, entropy is maximized for starters
Again, spork's cylinder of compressed air is a counterexample to that assertion.
Equal temperature is necessary but not sufficient to imply equilibrium or maximum entropy - for that, all aspects of the system (such as pressure, chemical constituents like batteries, etc.) must be "thermalized".
If the air in the greenhouse is heated by the sun, sure you can do that, it's just an inefficient solar power plant. If it's heated by a heat pump, see above.
Earth's atmosphere is heated by a heat pump?
Earth's atmosphere is heated by the sun. You can obviously use solar power to do useful work; your suggestion to attach a heat engine to a greenhouse is a particularly inefficient way to do so.
My other point was that if you had a "greenhouse" (or box, or whatever) that you heat with a heat pump but not with the sun (or any other external source), and then you attach a heat engine to it, you will always operate at a net loss.
I am saying that somewhere in the system you need a temperature differential or there is no entropy.
You presumably mean, there must be a temperature difference because otherwise the entropy is maximized and nothing can do work? That's not the case. The air in the compressed piston that spork brought up is a good example - there, after the temperatures have equilibrated the difference in entropy comes from the pressure being different. A battery at room temperature is another example, or enriched uranium, etc.
So no, heat pumps do not require temperature differences to operate.
Try running a combustion engine in a blast furnace. It won't go.
Here I can't even guess what you're getting at.
Heat moved by heat pump -> Heat converted by heat engine -> converted energy out ->
You can do that, but at a net loss.
Why do you assume it would be at a net loss necessarily? Or how do you know?
(1) Because the laws of thermodynamics say so.
(2) Because otherwise you'd have a perpetual motion machine.
(3) Because it's totally obvious. What you're proposing is the thermodynamic equivalent of (say) using an electric pump to send water from a ground level tank up into an elevated tank, and then letting it come back down to ground level, driving a turbine that is plugged back into the pump. Obviously this will operate at a net loss, and so will a heat pump-heat engine chain, for the same reason.
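You can make the pump-and-turbine analogy quantitative. Even if both machines were ideal Carnot devices (a best case that real hardware never reaches), the chain only breaks even - here's a back-of-the-envelope check with example temperatures I've picked arbitrarily:

```python
# A Carnot heat pump delivering heat to a hot reservoir at Th, drawing from Tc:
#   COP = Qh / W_in = Th / (Th - Tc)
# A Carnot heat engine running between the same Th and Tc:
#   efficiency = W_out / Qh = 1 - Tc / Th
Th, Tc = 330.0, 290.0   # arbitrary example temperatures, kelvin

cop = Th / (Th - Tc)
eta = 1 - Tc / Th
W_in = 1.0               # put 1 J of work into the pump
Qh = cop * W_in          # heat delivered to the hot side
W_out = eta * Qh         # work recovered by the engine

print(W_out)  # 1.0: even perfectly reversible machines only break even
# Real machines fall short of both Carnot limits, so W_out < W_in: a net loss.
```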
Let me ask this; is there any reason one could not use heated air from a greenhouse? Use that hot air to run a heat engine, which would leave colder air. Return the cold air to the greenhouse to be reheated and continue the process? Without a net loss, that is.
If the air in the greenhouse is heated by the sun, sure you can do that, it's just an inefficient solar power plant. If it's heated by a heat pump, see above.
Heat pumps do require a temperature difference.
Maybe you're confusing heat pumps with heat engines? Heat pumps do work to create or increase a temperature difference (like an air conditioner).
The idea that a heat pump is the reverse of a heat engine.
The two machines are fundamentally different in function and operation. I do not see how one can be "The opposite" of the other.
They are opposites in that a heat pump does work to increase a temperature difference, while a heat engine uses a temperature difference to do work.
Heat pumps don't CONVERT anything. They don't CONVERT cold into heat or any other form of energy. A heat engine DOES CONVERT heat into other forms of energy. It does not merely MOVE heat from a hot source back into a "cold source."
Again, a heat pump does work to produce or maintain a temperature difference, while a heat engine uses the temperature difference to do work.
Heat moved by heat pump -> Heat converted by heat engine -> converted energy out ->
You can do that, but at a net loss.
After thinking about it, I'm pretty sure it is all contained in PV=nRT.
Which says that if you have a fixed amount of gas in moles, it will be contained in a fixed-size container at a specific pressure and temperature. If you compress it by making the container smaller, the pressure will rise as well as the temperature. But if you let it cool back down to ambient so it is at the original temperature again, then for the equation to be true with a smaller volume, the pressure will have to be higher. That higher pressure can do work. But the total energy contained (which is less than what you started with, because some of it was lost to the environment as heat) is still defined by the temperature for that many moles of material at the given pressure and volume. But if you expand the volume back to the original volume, it will cool and will have to recapture the energy back from the ambient environment to get back the original pressure.
An air compressor on the other hand changes the number of moles of material in the tank but once the compressor stops, and you know how many moles you have in the tank at some pressure level, the energy is defined by the temperature. Or you can pick two other variables and say the energy is defined by the one remaining. In an air compressor tank (garage variety), the degrees of freedom are the pressure, temp, number of moles, but not the volume since it is a fixed tank.
That's all correct, but I don't think it addresses the point spork found confusing, which is this (if I understand him): after you compress the gas and then let it equilibrate with a surrounding atmosphere, its energy is back to where it started. How then can it do work on anything?
The answer is that energy is not the limiting factor here. After all, the atmosphere has tons of energy in it. What's potentially lacking is a subsystem that's out of equilibrium, with entropy that's below the maximum value. Compressing the gas creates such a subsystem even after you allow its temperature to equilibrate.
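A quick numerical illustration of that point (numbers invented), using a monatomic ideal gas compressed to half its volume and then allowed to cool back to ambient:

```python
import math

R = 8.314           # molar gas constant, J/(mol K)
n, T = 1.0, 298.0   # 1 mol at room temperature (made-up numbers)

def U(n, T):
    # Internal energy of a monatomic ideal gas depends ONLY on n and T.
    return 1.5 * n * R * T

# Before compression, and after compressing to half the volume and letting
# the gas cool back to the ambient temperature:
U_before = U(n, T)
U_after = U(n, T)             # same n, same T -> same energy

# Yet the compressed gas can still do work: its Helmholtz free energy rose by
dF = n * R * T * math.log(2)  # F = U - T*S, and S dropped by n*R*ln(2)

print(U_after - U_before, dF)  # 0.0 and ~1717 J
```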
The claim of ddw is a CRACKPOT claim and you people in the cargo cult are just insane for believing it besides being too GUTLESS to even look at a high frame rate video which will CONFIRM the cart is oscillating up the belt with periodic loss of traction that probably occurs at the propeller blade pass frequency. Indeed, it is impossible for such a lightweight cart to maintain constant traction on the belt once released since the rotational KE of the propeller is more than enough to cause the wheels to lose contact with the belt.
They are not false, and of course they are two of the main reasons you want to argue over this.
Yes they are, and your source textbook has now been changed to correct that error.
I'm curious if you're ever going to acknowledge you were wrong about this. Think about it - after frantically scouring the internet to counter the overwhelming evidence (math, computer simulations, fluid dynamics textbooks, physics arguments) against your assertions, you finally come up with an authoritative source that clearly supports you. An oceanography textbook! What could possibly be more authoritative?
And now, the new edition of that same oceanography textbook has been changed precisely to correct the error you've been advocating. And what do you do? Pretend it's not there and post the same old red herrings and diversions and "because FysiX" arguments you always did.
Is this how people argue against global warming? Sad, because there the truth really matters.