Talk:Tensor product of modules

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Nomen4Omen (talk | contribs) at 08:43, 29 May 2015 (Property 3 differs slightly from bilinearity: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

TeX vs. HTML

(I changed most of the HTML at the beginning to TeX.) - Sorry, I didn't realize this was such an apparently charged issue. The bilinearity axioms still render as non-TeX on my settings, though. I'll let you decide what to do now.

Also, I added the universal property because the article was referring to a universal property "above", which wasn't there. Then I found the universal property below, so it's duplicated now. But I don't want to delete it; I think I already messed with the article enough :) Functor salad 13:16, 25 September 2007 (UTC)[reply]

Verification questions

Hello all. There is an implicit claim here which is along the lines of "if the tensor functor is not exact, minimal generating sets may not be sent to minimal generating sets, but when the functor is exact minimal generating sets DO go to minimal generating sets". Vector spaces (the functor is always exact in this case) are given as an example, but I've only seen this proven in an ad hoc way which doesn't mention exact functors. The claim seems plausible, but I would like to ask a more senior category theorist to double check what is written and provide confirmation. Thanks! Rschwieb (talk) 20:19, 22 February 2011 (UTC)[reply]


Why isn't it written that the tensor product is again an R-module?

I've learned to be cautious with mathematical texts: when something obvious is not written it often means that it's wrong. But here, what's the point of doing a tensor product if we don't get what we want? Why do they specifically say the image is an abelian group when one naturally wants to say it's automatically a module? Noix07 (talk) 14:25, 4 August 2014 (UTC)[reply]

It says in the lead that the result is a module ("resulting in a third module") – provided that we start with modules over a commutative ring. In the case where we start with a left- and right-module and the ring is non-commutative, your observation is accurate: we do not end up with a module, only an abelian group, which is to say, addition is defined, but there is no scalar multiplication for the resulting object, which would be necessary to make it a module. This is explained under Tensor_product_of_modules#Multilinear_mappings. Can you suggest where changes in wording would distinguish the two cases more clearly? —Quondum 14:54, 4 August 2014 (UTC)[reply]
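A sketch of where the extra module structure comes from (the bimodule set-up is standard, but treat the details here as a sketch, with S an auxiliary ring introduced for illustration): if M carries a left S-action that commutes with its right R-action, then

  s \cdot (m \otimes n) := (s\,m) \otimes n

is well defined on M \otimes_R N, because the balanced relations are preserved:

  s \cdot \bigl((m\,r) \otimes n\bigr) = (s\,m\,r) \otimes n = (s\,m) \otimes (r\,n) = s \cdot \bigl(m \otimes (r\,n)\bigr).

Over a commutative ring one may take S = R, which gives the R-module structure referred to in the lead; for a bare right module and left module over a noncommutative ring there is no such extra action, and only the abelian group remains.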

Other universal property?

There is another universal property: let R be a ring and let M and N be left R-modules. If P is a left R-module and φ is an R-module homomorphism, then there is a unique map making the obvious diagram commute. This can be useful (e.g. in commutative algebra) when one does not want to deal with cartesian products, which a priori have no real structure. — Preceding unsigned comment added by Mobius stripe (talk | contribs) 2014-11-23T15:05:41

I think something (namely N) is missing in the input. There is an analogous universal property when S is an R-algebra. (This stuff is perhaps worth mentioning.) -- Taku (talk) 15:42, 6 April 2015 (UTC)[reply]

R ⊗Z R ≠ R ⊗R R ?

The article says in Tensor product of modules#Definition:

It can be shown that R ⊗R R and R ⊗Z R are completely different from each other.

Question: could this be made a bit more specific?

Certainly R ⊗Z R = R, because the bilinear map R × R → R is the multiplication in R.

But also R ⊗R R = R for the vector space R/R.

So certainly the two are different in algebraic structure, i.e. R ⊗Z R as an abelian group is furnished only with the trivial scalar multiplication. But the two are the same as sets and as abelian groups. Aren't they?

--Nomen4Omen (talk) 16:32, 25 January 2015 (UTC)[reply]

(Disclaimer: not my area, but this may give some pointers; it may help to make this clearer in the article.) See Tensor product § Definition – this allows us to think of the tensor product as a quotient of the Cartesian product, considered as a free Z-module. The scalar multiplication is the same in all cases under the definition given in this article (only multiplication by Z is defined), and thus even R ⊗R R is a Z-module, not an R-vector space. What is significant is that the equivalences are different: xr ⊗Z y = x ⊗Z ry only when r ∈ Q or x = y = 0 under this quotient, AFAICT. You can say that R ⊗R R = R, the additive group of the real numbers (and ⊗R is the same as real multiplication, but for the lack of scalar multiplication on the resulting algebraic object), but R ⊗Z R is not the same as multiplication (Z-bilinearity and R-bilinearity are not the same thing), since we get for example that π ⊗Z 1 ≠ 1 ⊗Z π. —Quondum 00:08, 26 January 2015 (UTC)[reply]
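A short way to see that π ⊗Z 1 ≠ 1 ⊗Z π (the maps f and g below are chosen for this argument, using a Q-basis of R containing 1 and π): pick Q-linear maps f, g : R → Q with f(π) = 1, f(1) = 0, g(1) = 1, g(π) = 0. Then β(x, y) = f(x)g(y) is additive in each argument and Z-balanced, so it induces a homomorphism \bar\beta on R ⊗Z R, and

  \bar\beta(\pi \otimes_{\mathbf{Z}} 1) = f(\pi)\,g(1) = 1, \qquad \bar\beta(1 \otimes_{\mathbf{Z}} \pi) = f(1)\,g(\pi) = 0.

By contrast, in R ⊗R R the balanced relation over R gives π ⊗R 1 = (1 · π) ⊗R 1 = 1 ⊗R (π · 1) = 1 ⊗R π.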

Tensor product of finite-dimensional free modules

It would seem to me that for the tensor product of finite-dimensional free modules we have the isomorphism

M ⊗R N ≈ HomR(M∗, N),

where M is a right R-module and N is a left R-module, R is any ring, and M∗ is the dual module of M. That is to say, the tensor product is an R-linear map in exactly the same way as when the modules are vector spaces, even when R is a general ring (in particular, when it is noncommutative), provided a suitable constraint is imposed on the modules M and N. This, or something very similar to it, seems to be confirmed by various discussions on Stack Exchange. I may also have confused some things here, so I'd be looking for correction from those whose subject area this is.

The question is: should this (or whatever the correct version is) not be included in this article? Reading the article leaves the impression that the usual tensor product on vector spaces (considered as a space of linear maps) does not generalize to non-fields, and especially not to noncommutative rings. Yet, the category of tensor products above seems to be vast and a natural (and sensible) generalization of the more familiar (heterogeneous) tensor product of vector spaces. —Quondum 05:12, 20 March 2015 (UTC)[reply]

More generally, this is true for finitely-generated projective modules (this includes, for instance, vector bundles over compact manifolds and coherent sheaves over varieties). See Bourbaki, Algebra, II.4.1. It is important, and should be added to the article. Sławomir Biały (talk) 13:09, 2 April 2015 (UTC)[reply]

It seems to be directly addressed in Bourbaki, Algebra, II.4.2 (as a specialization of the case considered in II.4.1). I would intend to write this up to answer the question "How do tensor products (regarded as R-linear maps) generalize to modules?" It seems to me that the above generalization extends these to order 2 over projective modules (including the infinitely generated case, where it is not an isomorphism), but not to other orders, unless R is commutative. This suggests two diverging branches of generalization, one restricting the generalization with noncommutative R to projective modules, and the other restricted possibly only by R being commutative. This is still speculation on my part. —Quondum 17:27, 4 April 2015 (UTC)[reply]

I'm late to the discussion, but I would mention that a similar case is already noted at Sheaf of modules. I think, as in that article, it makes sense to first define the canonical module map:
E∗ ⊗R F → HomR(E, F),
where E∗ is the dual module. We can then ask whether this canonical map is an isomorphism or not (it is so if E is free of finite rank, for instance).
By the way, this canonical map really is canonical in the following sense: when it is an isomorphism, one can interpret it as saying that an F-valued linear map is the same as a linear combination of terms of the form "a linear functional times some vector v in F". This is consistent with, say, the intuition for a vector-valued differential form; locally speaking, an F-valued differential form is just a scalar differential form times a vector in F. -- Taku (talk) 23:19, 4 April 2015 (UTC)[reply]
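A minimal sketch of the canonical map in question, assuming R commutative (the setting of Sheaf of modules) and writing E∗ = HomR(E, R); the symbol θ is introduced here for illustration:

  \theta : E^* \otimes_R F \to \operatorname{Hom}_R(E, F), \qquad \theta(f \otimes v)(x) = f(x)\,v.

It is well defined because \theta(rf \otimes v)(x) = r\,f(x)\,v = \theta(f \otimes rv)(x). When E is free with a finite basis (e_i) and dual basis (e^i), the inverse sends \varphi \mapsto \sum_i e^i \otimes \varphi(e_i), which is the sense in which θ is an isomorphism for E free of finite rank.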
I agree with defining the dual as you have. So, now I'm getting that we have several canonical homomorphisms, and that these exist for any type of module over the same ring R:
  • Given left R-modules E and F, there is a canonical homomorphism E∗ ⊗R F → HomR(E, F). [Bourbaki II.4.2(11)]
  • Given a right R-module E and a left R-module F, there is a canonical homomorphism E ⊗R F → HomR(E∗, F). [Bourbaki II.4.2(15)]
  • Both hold in the general case, and become isomorphisms if the modules E and F are restricted to being finitely generated projective modules.
I anticipate several further interesting cases of tensor products acting as module homomorphisms. It seems to me that there are many ways to interpret the tensor product of two modules as a linear map, with bilinearity as an interesting possibility. —Quondum 01:26, 6 April 2015 (UTC)[reply]
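A check of the first bullet above, under the convention (assumed here) that the dual of a left R-module E is the right R-module E∗ = HomR(E, R) with (f · r)(x) = f(x) r:

  \theta : E^* \otimes_R F \to \operatorname{Hom}_R(E, F), \qquad \theta(f \otimes y)(x) = f(x)\,y.

The formula is balanced, \theta(f \cdot r \otimes y)(x) = \bigl(f(x)\,r\bigr)\,y = f(x)\,(r\,y) = \theta(f \otimes r \cdot y)(x), and each \theta(f \otimes y) is left R-linear, since \theta(f \otimes y)(s\,x) = f(s\,x)\,y = s\,f(x)\,y. Nothing here uses finiteness or projectivity; those hypotheses only enter when asking θ to be an isomorphism.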
Okay, so now that I've added that and the world hasn't exploded yet, I'd like to look for further examples of tensor products of modules acting as multilinear maps. My intuition (disclaimer: high probability of confusion due to guessing) says that
  • Over a commutative ring,
    • elements of arbitrary finite n-ary tensor products canonically map tensor products of suitable modules over the same ring into tensor products of modules, with duals used as necessary
    • an element of a tensor product of a module E with its dual maps canonically into EndR(E), and thence onto its trace (Bourbaki II.4.3)
  • In the noncommutative case,
    • an element of a tensor product of a module E with its dual maps canonically into EndR(E), but no trace is defined
    • the map E ⊗R F → BilinZ(E, F; R) should be canonical
    • the map E ⊗R F → Hom(R, E ⊗R F) does not work (multiplication of a tensor product and a scalar is undefined)
    • there is no equivalent of the repeated tensor product over the same ring
    • tensor products can be composed, preserving the linearity/bilinearity properties.
  • In short, for the noncommutative case, it seems to me that tensors build a pretty little tensor algebra, except that the order is limited to 2, and scalars do not mix with tensor products. It would be interesting to see this applied to, say, geometry (e.g. how does one define curvature of an H-manifold?). Where to find this in say Bourbaki would be helpful.
Quondum 03:21, 6 April 2015 (UTC)[reply]
I don't know if I understand the above completely. But I know that if E is a free module of finite rank (or even projective of finite rank) and if R is commutative, then one can identify E∗ ⊗R E ≅ EndR(E), and then the pairing E∗ ⊗R E → R given by contraction is the same as the trace map tr on EndR(E) when an endomorphism is viewed as an element of E∗ ⊗R E (if you pick a basis, clearly this is the usual trace). If R is non-commutative, then I don't know how this thing works. -- Taku (talk) 13:13, 6 April 2015 (UTC)[reply]
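For what it's worth, the free, finite-rank, commutative case worked out with a chosen basis (e_i) and dual basis (e^i) (the matrix entries a_{ij} are introduced for illustration):

  \varphi(e_j) = \sum_i a_{ij}\, e_i \quad\text{corresponds to}\quad \sum_{i,j} a_{ij}\, e^j \otimes e_i, \quad\text{since}\quad (e^j \otimes e_i)(e_k) = e^j(e_k)\, e_i = \delta_{jk}\, e_i,

and applying the contraction f \otimes x \mapsto f(x) to it gives \sum_{i,j} a_{ij}\, e^j(e_i) = \sum_i a_{ii} = \operatorname{tr}\varphi, i.e. the contraction recovers the usual trace of the matrix (a_{ij}).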
I'm feeling my way, so I have a high likelihood of being wrong. We're clearly agreed that the second step (finding the trace) only applies to the commutative case. The first step seems to apply, though: we know that E∗ ⊗R F → HomR(E, F) is a canonical homomorphism, and consequently so is E∗ ⊗R E → HomR(E, E) = EndR(E). My statement above does not take it further. Have I missed something? —Quondum 13:36, 6 April 2015 (UTC)[reply]
Yes, the identification of the tensor product and End is probably true for a non-commutative ring as well (hopefully I will reply to your other points later). -- Taku (talk) 02:28, 7 April 2015 (UTC)[reply]
About the definition of a dual module, I liked the old one: E is a left module and E∗ is a right module; the idea is that one "uses up" the left action; after that he (or she?) is left with the right action, whence my correction. The action ultimately comes from R, which is a bimodule. I didn't think the corrected version was confusing. -- Taku (talk) 01:31, 7 April 2015 (UTC)[reply]
The "using up" works either way. No, your corrected version wasn't confusing, it is just that I am trying to order the notation so that the "used up" action of the tensor product is on the "right", adjacent to the module being operated on. This will allow those familiar with matrices just to "drop the brackets". Right modules are like column vectors (which are more common than row vectors); matrices act on them from the left, and scalars from the right. I realize that this is at odds with Bourbaki and much of the abstract notation here on WP; we can debate the merits. —Quondum 01:55, 7 April 2015 (UTC)[reply]
Aaah, that's very good; I didn't think of matrices at all. If you have a matrix in mind, then I completely agree that a right module is a "right" module to use. Please keep up working on the article. -- Taku (talk) 02:28, 7 April 2015 (UTC)[reply]
I've been going through some texts about tensor products and the only text I could find that goes beyond the most elementary level, besides Bourbaki, is Milne's commutative algebra notes: [1]. Unfortunately, it's about commutative algebra and doesn't cover the non-commutative case, but it has some interesting stuff; e.g., Lemma 12.11 (and we should probably add some of them). In particular, currently, the article says nothing about the contraction map (the pairing between a module and its dual is a special case). -- Taku (talk) 22:58, 8 April 2015 (UTC)[reply]
Did you mean that link (contraction map)? It does not seem to apply. The duality pairing does not seem to me to apply to tensor products (i.e. the duality pairing is defined, but it does not really apply to a tensor product), and even the trace (the closest thing I can think of) applies only to the commutative case. —Quondum 03:49, 9 April 2015 (UTC)[reply]
Sorry, the link should be tensor contraction. As you said, the non-commutative case seems to have a problem (Bourbaki seems to talk about the case when a module has more than two ring actions; maybe that's what is needed?). I don't think there is any issue in the commutative case: the discussion in that linked article applies if the base field is just a commutative ring. If a module is free, even the index notation applies too (but is probably not particularly interesting?). We should consider discussing more concrete cases: for example, if R is the ring of smooth functions, the contraction should be the usual contraction in differential geometry (I need to brush up some diff-geo here). This case generalizes to the non-commutative case by replacing R by the ring of differential operators; sheaves of rings of such are more natural, but Sheaf of modules ignores the non-commutative case, and so perhaps it makes sense to discuss it here. In this generality something like a trace map exists, since it is needed for, ah, various purposes. (Azumaya algebras are also an interesting non-commutative case.) -- Taku (talk) 12:46, 9 April 2015 (UTC)[reply]

Trace of a tensor

The trace, defined via the contraction E∗ ⊗R E → R, f ⊗ x ↦ f(x), needs clarification. In particular, not all tensors in E∗ ⊗R E can be written as simple products of elements of the respective modules. A few words explaining why (and how) this extends to the whole tensor product space are needed.

As to (not) extending the trace to the noncommutative case, a demonstration of why this does not work is straightforward. IMO, we can put this statement in without much contention, even if we cannot find a source for the nonworkingness of it now. Take the simple case of the (intended) canonical map E∗ ⊗R E → R (assume that E∗ is a right R-module):

We have that f⋅r ⊗ x = f ⊗ r⋅x in E∗ ⊗R E, yet the intended trace gives (f⋅r)(x) = f(x)⋅r for the first expression and f(r⋅x) = r⋅f(x) for the second.

This, assuming a noncentral regular element r of R and that the trace can be nonzero, is a contradiction. Conclusion: the trace, as would fit the canonical map, does not exist if R contains a regular noncentral element. (Maybe I missed a corner case, but that's the idea.) The determinant will have similar issues in the noncommutative case, as should the exterior algebra of a module. —Quondum 17:41, 9 April 2015 (UTC)[reply]

On second thoughts, this does not stop us defining the trace "up to conjugation". I don't know how useful this would be, though. —Quondum 17:53, 9 April 2015 (UTC)[reply]

Yeah, I like that improvement. —Quondum 18:25, 9 April 2015 (UTC)[reply]

First of all, it is standard and not problematic to define a linear map on a generating set instead of on all elements. Since pure tensors (elements of the form x ⊗ y) do generate the whole tensor product, it is enough to just map the generators (pure tensors in this case). It is clear (and in fact I just did actually check with paper and pencil) that the map extends by linearity. This is mentioned in the important proposition in the definition section (maybe that needs to be clarified). As for the example, I have trouble following it (the intended trace is right R-linear, am I right?). To me the main issue is the module structure: since E∗ ⊗R E is just an abelian group, R-linearity doesn't make sense. The example makes sense; I don't have a good feeling about the non-commutative case :) -- Taku (talk) 18:33, 9 April 2015 (UTC)[reply]
You just preempted a discourse by me with that last edit ;). I'm not sure about what you're trying to say with the rest; I fully agree, and distinctly prefer the "induced by" approach (from memory, even Bourbaki does it this way). I hope you didn't interpret my remark as sarcasm. —Quondum 19:13, 9 April 2015 (UTC)[reply]

Linearity-preserving map vs. Linear map

I think that when I changed the wording from "linear map" to "linearity-preserving map", my thinking may have been confused. Am I correct in saying that there is no difference between an R-linear map and what I've called a linearity-preserving map (up to isomorphism)? Should I undo this change? —Quondum 05:10, 11 April 2015 (UTC)[reply]

Yes, so I had a problem with that change. I think the idea is that in some situations (f-gen projective over a commutative ring) a tensor element can be identified with an R-linear map. So "as a linear map" makes sense. I'm not sure what is meant by "linearity-preserving map". It is true that there is a canonical way to map tensor elements (elements in tensor products) to linear maps, but I don't know the name for this map. Bourbaki, for instance, doesn't give it any name. -- Taku (talk) 15:02, 11 April 2015 (UTC)[reply]
  • Okay, I've reversed that edit of mine.
  • About your comment "in some situations (f-gen projective over a commutative ring)", I disagree with the restriction. I believe that it is fully general: every tensor product element can be identified with an R-linear map (and the proof that I have in mind is extremely simple; also, I do not see Bourbaki making any such restriction). The finitely generated projective module restriction only applies if you require the identification to be bijective.
  • What I meant by a "linearity-preserving map": Just as matrices can multiply matrices and not only vectors, tensor elements can act on tensor elements as well as on modules. In this action, they cannot be described as R-linear, but they still have the equivalent property of preserving the right linearity of the objects they act on, and that was what I was referring to. But it is moot: the two properties (linear and linearity-preserving) seem to be equivalent under a suitable identification, so let's stick with the standard terminology. —Quondum 17:25, 11 April 2015 (UTC)[reply]
  • When I qualified with "in some situations", I was referring to the fact that the canonical homomorphism from the tensor product to hom need not be injective. One can map tensor-product elements to linear maps; but without the canonical map being injective, one cannot make the identification. (Or so I understand. "If" the canonical homomorphism is always injective, we should say so. Otherwise we should give a counterexample for injectivity, if doing so is not too difficult. This article is the right place for such an example.)
  • I'm actually interested in the matrix-theoretic interpretation as well. After all, the endomorphism ring of a free module can be identified with the matrix ring (this works even for non-commutative rings; see Ring_(mathematics)#Matrix_ring_and_endomorphism_ring), and that should be interesting. -- Taku (talk) 21:58, 11 April 2015 (UTC)[reply]
We seem to be on the same page. And I agree that the matrix-theoretic interpretation is interesting – it develops concepts of linear algebra directly into representations of modules (of which I formally know essentially nothing!), and this article seems to be the right place for it. Specifically, it seems to me that linear maps between finitely generated projective modules in general may be represented as matrices with elements from the underlying ring, and that elements of tensor products of arbitrary modules may be represented in this way (using potentially infinite matrices). (I'm interested specifically in the non-commutative case – the commutative case not so much.) Even if we cover only the finite basis free module case, it will be a worthwhile addition. —Quondum 22:36, 11 April 2015 (UTC)[reply]
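A sketch of the matrix-theoretic reading in the simplest case (free modules of finite rank; the choice of sides is a convention assumed for this sketch): writing elements of the free right R-modules E = R^n and F = R^m as column vectors,

  \operatorname{Hom}_R(R^n, R^m) \cong M_{m \times n}(R),

with a matrix A acting by left multiplication on columns; right R-linearity holds because A(x\,r) = (A\,x)\,r. Identifying E∗ with row vectors, the canonical map F ⊗R E∗ → HomR(E, F) sends a simple tensor (column y) ⊗ (row f) to the outer product y\,f, an m × n matrix, and composition of such maps is matrix multiplication — the "matrices acting on matrices" picture mentioned earlier.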

Canonical map required to be injective for R-linearity?

The edit summary (not quite unless the canonical homomorphism is injective) for this edit suggests that I've run into a problem with my understanding of the meaning of a "right R-module homomorphism", or HomR(E, F) for right R-modules E and F. As I understand it, HomR(E, F) is the set of all maps E → F that preserve the module structure (φ ∈ HomR(E, F) implies that φ(e + e′) = φ(e) + φ(e′) (additivity) and φ(e ⋅ r) = φ(e) ⋅ r (homogeneity)). I see no problem so far: we know (from Bourbaki) that the canonical homomorphism θ : F ⊗R E∗ → HomR(E, F) exists, and this just says that θ maps an element ξ of the tensor product to such a φ, θ : ξ ↦ φ. If the canonical homomorphism is injective, this means there do not exist distinct ξ and ξ′ such that θ : ξ ↦ φ and θ : ξ′ ↦ φ for any φ. How come injectivity of θ is required for the statement "Thus, an element of a tensor product ξ ∈ F ⊗R E∗ may be thought of as a right R-linear map ξ : E → F" to hold? If it is not injective, all this means is that more than one element ξ may map to the same right R-linear map φ, but this seems irrelevant to the statement. —Quondum 15:25, 12 April 2015 (UTC)[reply]

I think this is a matter of attitude; if there is a linear transformation from a vector space V to a vector space W, I don't interpret a vector in V as a vector in W. Each vector in V uniquely determines a vector in W; for me, this is not the same thing as saying that a vector in V is a vector in W; that would require the transformation to be injective. -- Taku (talk) 16:49, 12 April 2015 (UTC)[reply]
Ah, I see what you're saying – fair enough. So we need to find a different wording. The phrase "may be thought of as" is problematic. Maybe "Thus, an element of a tensor product F ⊗R E∗ may act as a right R-linear map E → F"? —Quondum 17:23, 12 April 2015 (UTC)[reply]
Yeah, that would be definitely the better wording. -- Taku (talk) 19:15, 12 April 2015 (UTC)[reply]

Associativity of the tensor product

Bourbaki, Algebra, ch. II §3.8 deals with this topic. It is clearly a generalization of §§ Several modules, since it deals with the noncommutative case, which specializes to exactly the multilinear case over a single commutative ring. Essentially, given a right R-module L, an (R,S)-bimodule M and a left S-module N, we get the isomorphisms L ⊗R (M ⊗S N) = (L ⊗R M) ⊗S N = L ⊗R M ⊗S N. My inclination would be to present the general case as Bourbaki does, and then to specialize it. —Quondum 19:11, 12 April 2015 (UTC)[reply]
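For reference, the obvious candidate map behind those isomorphisms (a sketch under the bimodule hypotheses just stated):

  (L \otimes_R M) \otimes_S N \to L \otimes_R (M \otimes_S N), \qquad (l \otimes m) \otimes n \mapsto l \otimes (m \otimes n).

One checks that it is additive and respects the balanced relations l\,r \otimes m = l \otimes r\,m and m\,s \otimes n = m \otimes s\,n, using that M is an (R,S)-bimodule, so that (r\,m)\,s = r\,(m\,s); the inverse is the map defined the same way in the other direction.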

That section is not general enough (and requires updates). The general form of the associativity is already noted in the "properties" section. -- Taku (talk) 19:16, 12 April 2015 (UTC)[reply]
Oops, yes, it already does. I'd prefer to have the properties present the general case first, and then the changes that result from specialization to a commutative ring. Does this make sense? —Quondum 00:03, 13 April 2015 (UTC)[reply]
It does, but yes, as you suggested it, it's probably not an optimal approach from the expository aspect. What is needed is to discuss the universal properties for multilinear maps and the tensor product of several modules. I'm lazy busy and no one is doing the job for the moment :) -- Taku (talk) 01:20, 13 April 2015 (UTC)[reply]
Bourbaki, who know tensors the best, use the notion of multimodule (more than two ring actions on a module, get it?); that's how they handle tensor products of several modules over non-commutative rings. I'm just not sure if we want to use this notion; there is probably a better way. -- Taku (talk) 01:43, 13 April 2015 (UTC)[reply]
I still need to wrap my head around the multimodule concept. This may be getting very deep for a WP article, other than the mention of what can be achieved this way. It is way over my head, but tantalizing. I'll keep my edits to what can be shown using bimodules, for now. —Quondum 02:48, 13 April 2015 (UTC)[reply]
Yargh. This one keeps playing with my brain, and Bourbaki is a bit cryptic for me on this. Am I correct in saying that w.r.t. modules, the left/right distinction for scalar multiplication is mathematically spurious (in the sense that every right scalar multiplication can be considered to be equivalent to a left scalar multiplication of the opposite ring)? This brings to mind a picture of a multimodule that simultaneously supports left, right, top, bottom, (whatever) scalar multiplication, and that the tensor product over one or more of these simultaneously would be possible. There is presumably some compatibility constraint on these scalar multiplications, even if only Z-linearity. Or am I completely missing the intention? —Quondum 18:57, 13 April 2015 (UTC)[reply]
I had exactly the same mental image as you: top action, bottom action, maybe 45-degree action, etc. But actually it's not as bad as it might appear at first. The idea is simple: we simply allow more than one left or right action; what is a ring action anyway: it is a ring homomorphism R → End(M). This means each element r in R determines a linear transformation of M; a right action is precisely a left action of the opposite ring of R, as you said. Then it is not much of a leap to consider a family of left actions Ri → End(M). When an "obvious" compatibility condition is met, a module becomes a left multimodule. Similarly, one can consider a right multimodule structure, and then, in an "obvious" way, a module that has both left and right multimodule structures (or just a multimodule). At least this is how I understood it. -- Taku (talk) 20:59, 13 April 2015 (UTC)[reply]
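Concretely (one reading of the "obvious" condition, treated here as an assumption): for a family of left actions by rings Ri and right actions by rings Sj on the same abelian group M, the multimodule condition is that actions by different rings commute pairwise,

  r \cdot (r' \cdot x) = r' \cdot (r \cdot x), \qquad (r \cdot x) \cdot s = r \cdot (x \cdot s), \qquad (x \cdot s) \cdot s' = (x \cdot s') \cdot s \qquad (r \in R_i,\ r' \in R_{i'},\ i \neq i';\ s \in S_j,\ s' \in S_{j'},\ j \neq j'),

so that M is a module over each ring separately and the structures do not interfere; an (R,S)-bimodule is the case of one left and one right action.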
This is intriguing because it brings back the possibility of higher-order tensor products, and with it, exterior algebras; I had sort of assumed that went out of the window with noncommutativity. But now I have this picture of building tensor products out of multimodules like one builds covalently bonded molecules out of atoms ... but this may bring with it the possibility of nonisomorphic structural isomers, with consequences to the associativity of the tensor product. —Quondum 21:33, 13 April 2015 (UTC)[reply]
On second thoughts, perhaps the exterior algebra needs nothing more than an (R,R)-bimodule. So noncommutative geometry might still be good using this. —Quondum 21:37, 13 April 2015 (UTC)[reply]

Bilinear map

The section 'Balanced product' defines a bilinear map as a synonym of balanced product. I see that Bourbaki does not refer to a balanced product (although it uses the concept without naming it), and only seems to use bilinear map in the sense that ⟨au, vb⟩ = a⟨u, v⟩b, where u and v are elements of modules, and a and b are their respective scalars. EOM defines it likewise. I have not located other references. Should we remove this unfortunate use of bilinear map? —Quondum 05:31, 24 April 2015 (UTC)[reply]
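For comparison, the two notions side by side (a sketch; the bimodule decoration in the second line follows the Bourbaki/EOM formula quoted above):

  \text{balanced: } \varphi(x \cdot r,\, y) = \varphi(x,\, r \cdot y), \quad \varphi \text{ additive in each argument};
  \text{bilinear (Bourbaki/EOM): } \varphi(a \cdot x,\, y \cdot b) = a \cdot \varphi(x, y) \cdot b,

the latter presupposing bimodule structures so that the outer scalars a and b act on the target. Over a commutative ring an R-bilinear map is in particular balanced, but a balanced map need not be R-bilinear.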

I don't know a good solution to this terminology issue. "Bilinear" could be very misleading in the case where R is non-commutative; I personally like "balanced product". Some texts don't even bother to introduce a terminology (they refer instead to a Z-bilinear map with the middle linearity). Bourbaki is a bit old, so that might be why they don't use the term "balanced product". Dummit-Foote, Abstract Algebra, a fairly standard and reliable text, uses (if I remember) "balanced product". -- Taku (talk) 14:18, 24 April 2015 (UTC)[reply]
Well, I think that simply dropping the use of "bilinear" in this sense is a good solution, then: it is neither sensible, notable nor referenced; it is also in direct conflict with Bourbaki's sensible use of the term. I'll remove it. I see there is the same problem at Bilinear map, which I'll remove too. —Quondum 15:36, 24 April 2015 (UTC)[reply]

Over which ring?

I'm pretty convinced that this edit gets it wrong. See § R ⊗Z R ≠ R ⊗R R ? above. —Quondum 04:51, 28 April 2015 (UTC)[reply]

But see the example at "Examples". The problem is that they are isomorphic as Q-vector spaces. The issue is thus the ring structure. In any case, the "definition" is not the best place for the discussion of this example; "examples" is. Because of the nonuniqueness of the expression, it is pretty tricky to show that tensor products are not isomorphic (whence the plenty of examples in the section). -- Taku (talk) 12:00, 28 April 2015 (UTC)[reply]
I think that the example that claims that R ⊗Z R ≈ R ⊗R R is simply wrong. The tensor products over Z and Q are the same, but that is as far as it goes. Over a larger field than Q it is not the same. R is an infinite-dimensional vector space over Q. In particular, the dimension of, say, R³ over R is 3, but over Q it is ∞. √2 is R-linearly dependent on 1, but it is Q-linearly independent of 1. I'm sorry that I do not have the detailed knowledge to put this concisely, but I think you see that this is a complicated area? —Quondum 14:27, 28 April 2015 (UTC)[reply]
The question is only about dimension, since dimension completely determines whether two vector spaces are isomorphic. What is the dimension of R as a Q-vector space? Infinity, but which infinity? Ask the same question for R ⊗Q R. That important example also shows how misleading it could be if one looks at expressions (i.e., tensors) to understand tensor products. -- Taku (talk) 16:20, 28 April 2015 (UTC)[reply]
Okay, I see what you're saying. (Unimportant aside: the example claims the Q-dimensionality of R to be the continuum, which doesn't feel right; I thought it'd be countably infinite, as in ℵ0. Whichever transfinite number it is, it is probably equal for both.)
So, we would need a way of making clearer why the isomorphism is useless. Perhaps "The result might be isomorphic as Q-vector spaces, but not as R-vector spaces" (only the one feels like an R-vector space). —Quondum 20:42, 28 April 2015 (UTC)[reply]

If I remember correctly, the Q-dimension of R is the continuum (roughly because it cannot be countable and cannot be larger than that of R; by the continuum hypothesis, we reach the answer, but of course you don't need CH with more work); see also Basis (linear algebra)#Related notions. I agree that, as an isomorphism, it's pretty useless. But it does constitute a counterexample to the claim that R ⊗Z R is different from R ⊗R R. As Q-vector spaces, and a fortiori as abelian groups, they are isomorphic. -- Taku (talk) 12:22, 29 April 2015 (UTC)[reply]
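The cardinal arithmetic behind that, for what it's worth (writing c for the cardinality of the continuum and using dimQ R = c):

  \dim_{\mathbf{Q}}(\mathbf{R} \otimes_{\mathbf{Q}} \mathbf{R}) = (\dim_{\mathbf{Q}}\mathbf{R}) \cdot (\dim_{\mathbf{Q}}\mathbf{R}) = \mathfrak{c} \cdot \mathfrak{c} = \mathfrak{c} = \dim_{\mathbf{Q}}\mathbf{R},

and since R is a Q-vector space, the natural surjection R ⊗Z R → R ⊗Q R is an isomorphism, so R ⊗Z R, R ⊗Q R and R (hence also R ⊗R R) are all isomorphic as Q-vector spaces, and a fortiori as abelian groups.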

Okay, I understand the need for a counterexample, but what it stands as a counterexample to is not clearly stated. The tensor product is, initially, only defined as an abelian group, so at first blush the structure as a Z-module is what is significant, so this argument is relevant. But we have the conundrum that ⊗R : R × R → R ⊗R R ≈ (R, +) is commutative, but ⊗Q : R × R → R ⊗Q R ≈ (R, +) is not. (Also note that the former has a natural multiplication by reals, the latter only by rationals.) How do we capture this? The codomains are isomorphic, but the maps are distinct. How does one describe this so that there is an intuition of what is happening here? Perhaps there is an example over small finite rings that behaves in the same way? (Maybe using Z/6Z?) —Quondum 14:29, 29 April 2015 (UTC)[reply]
I thought about this and it feels like this comes down to a philosophical question: what is a tensor product? Is it just some kind of module, or is it a pair consisting of a module and a universal map? Is the object in question a module or a pair? In the "examples" section, the attitude there is that a tensor product is just a module, so a calculation like Q ⊗Z G = 0 for any finite abelian group G makes sense. If we insist that tensor products are pairs, this gets tricky, since we have to explain what 0 means as a pair. If you adopt the "pair" picture, we cannot even compare R ⊗Z R and R ⊗R R; as you pointed out, the domains are different. So, the answer in that picture would be that they are "incomparable" (not the same thing as "non-same"). -- Taku (talk) 13:15, 30 April 2015 (UTC)[reply]
With the map (the pair), is it that the domains are different? Did I say that? I said that the induced structure on the codomain is different, but I'm not sure that is a strong argument for being "incomparable" (after all, one must always specify what structure is applicable in an isomorphism). I think of the definition as specifying the pair you describe, not just the module. —Quondum 16:24, 30 April 2015 (UTC)[reply]
(Believe me or not, I've been thinking about this.) Despite what I said above, I think the "pair" perspective doesn't work in practice, even if it might appear to be a good idea in theory. How do you reconcile a computation like R ⊗R M = M, where on the left there is a tensor product and on the right there is just a module? I think tensor products should work like direct sums or quotient modules. There is a canonical map, but that shouldn't count as part of the data. In other words, in particular, when we are doing a comparison, the ring over which the tensor product is taken does not enter the discussion. -- Taku (talk) 00:45, 3 May 2015 (UTC)[reply]
Unless I am misinterpreting something, I'm getting what feels to me to be a clearer picture. You use the soft term "philosophical question", but it seems to me to be a matter of definition of what one is actually referring to – one of those dual uses of terminology that bedevil mathematics (and which, frustratingly, mathematicians are often very blind to, and which cause all sorts of argument through failure to distinguish distinct concepts that share notation or a name). In this case, we need to distinguish between the tensor product as a module, and the tensor product as a map from two modules onto a module. A bit like distinguishing the real numbers as a set and the real numbers as a ring. The two tensor products concerned are isomorphic as Q-vector spaces, but they are not the same as maps. As I read the definition ("... the tensor product ... is an abelian group together with a balanced product ..."), the balanced product is a map that is an integral part of the definition: the tensor product is a pair, (M, ⊗), and over Q and over R this yields two pairs of which the respective Ms are isomorphic as groups, but the respective maps ⊗ are not equal, and hence the pairs are not isomorphic mathematical objects. In your example, you seem to be relying on inference of intended meaning, in this case that the '=' means isomorphism as modules. This, IMO, results from use of notation that implies that the module of the tensor product is meant, not the pair. This does not mean that one can translate this back into English as "the tensor products are isomorphic"; it would better be stated as "the modules that the tensor product generates are isomorphic as Q-(or even Z-)modules". Does this make sense? —Quondum 02:49, 3 May 2015 (UTC)[reply]
If not "philosophical", then maybe "by convention". I think I get your point; that it is necessary to distinguish a pair and a mere module, but that's why I said "in practice". It seems to be that, as far as computations go in practice, the canonical map ⊗ gets forgotten/ignored. This is very similar to the situation with a quotient module; by definition, a quotient module comes with a quotient map but in practice the quotient map is usually not regarded as a part of the quotient; for example, that's how one can have the isomorphism , the group of second roots of unity (they are isomorphic since they are both cyclic of the same order). Here the isomorphism doesn't really care about the quotient map . In much the same way, in the computation like the canonical map doesn't have a role in the isomorphism. I don't have a specific opinion on whether this is a good practice or not, but nonetheless I'm pretty sure this is the typical attitude in practice. -- Taku (talk) 17:13, 3 May 2015 (UTC)[reply]
Coming from my perspective of definitions, a quotient (of modules, groups, rings, ...) is defined as the resultant set with the inferred structure on the set. Thus, the quotient map is not part of the definition of a quotient. We need to look at how the tensor product is defined. Reviewing Bourbaki, a sort of answer seems to emerge: that there are two distinct meanings of "tensor product". The first meaning is the "tensor product of modules", which is defined as a quotient (and hence the canonical map is not involved). The second meaning is the "tensor product of elements of modules", which refers to the canonical map itself. Perhaps we can sort it out by rewording (defining!) everything (several articles) so as to make this distinction clear? In this picture there is no "pair", only an object and a map, each called a tensor product in its own way. This seems to be what you are saying about the first meaning. Thus, we can say that R ⊗Z R = R ⊗R R, and simultaneously that (in general) x ⊗Z y ≠ x ⊗R y. Quondum 18:29, 3 May 2015 (UTC)[reply]
Actually I wasn't thinking of the construction of a tensor product as a question at all; the construction is irrelevant for our discussion after all. I wanted to point out that it is not uncommon for the canonical map to be forgotten in the computation. I think you make the important point of "tensor product of elements". I understand the concept, but again that notion simply "doesn't figure" in the computation of tensor products of modules; maybe that's related to the tensor perspective, but the definition of tensor products of modules actually doesn't involve tensor products of elements at all. By definition, tensor products of modules are not made up of tensors; they just exist independently of their elements. It's very useful to have some concrete expressions in computation, but they are not part of the definition; that's why we have the important proposition in the definition section; it is very important to emphasize that what that proposition says is not part of the definition. My view (and probably the prevailing one) is that the canonical map is something like a quotient map; it is used to formulate the universal property but it is not part of the module M ⊗R N. Thus, when calculating M ⊗R N, the map ⊗ does not participate in the calculation, although it is often useful to use ⊗ in order to carry out the calculation. Obviously it does make sense, from the theoretical point of view, to compare various canonical maps ⊗. But the "examples" concern modules, not canonical maps. When we say two tensor products are isomorphic, we don't refer to universal properties; that should be a different kind of statement. I'm for adding more clarifying sentences. I think I get that R ⊗Z R ≈ R ⊗R R (never =, by the way) looks weird, but, from the module perspective, it's correct (and the isomorphism says nothing about canonical maps, rightly or not). This is also not weird if you can (and perhaps should) accept the perspective that tensor products of modules don't refer to their elements. -- Taku (talk) 20:38, 3 May 2015 (UTC)[reply]
Sorry, I was thinking back-to-front; agreed that the map is not part of the definition. I hope I've corrected my previous post suitably. Yes, I meant ≈, but got lazy when "\equiv" gave something else (after all, people do sometimes use '=' to mean 'isomorphic to'). Even so, it is a short-cut: the type of isomorphism should be specified. On "tensor products of modules are not made up of tensors", it seems to me that the elements of the tensor product of modules are tensors, in the sense that they are retrospectively named that way (if we take any linear combination of tensor products of elements to be a tensor). Bourbaki defines the tensor product of elements to be an element of the tensor product of modules via the canonical map: "the element of E ⊗A F which is the canonical image of the element (x, y) ... is denoted by x ⊗ y and called the tensor product of x and y". —Quondum 21:14, 3 May 2015 (UTC)[reply]
Yes, as a matter of notation, it is convenient to write the image of (x, y) under ⊗ as, guess what, x ⊗ y, as you said. But it is an unjustified leap to claim this gives any "intrinsic" meaning to tensor products of elements. One "could", but that choice would constitute unjustified original research. The approach here is to just cook up some modules satisfying the universal property; that is it! We don't give any answer to the fundamental question "what is a tensor?". In fact, the article never even defines a tensor at all (rightly so, in my view). Since the exposition in the "examples" section is fairly explicit about the type of isomorphisms, I still don't think there is any issue. -- Taku (talk) 16:09, 4 May 2015 (UTC)[reply]
No issue, per se. What I think would be useful is to note that the isomorphism of tensor products of modules does not imply equivalence of tensor products of elements of those modules, otherwise many could make the same mistake that I did. I was using "tensor" as a convenient label; I understand that it is nonstandard (hence my "if we take ..."). My statement still holds: the tensor product of elements is, by its definition, an element of the tensor product of modules. —Quondum 18:18, 4 May 2015 (UTC)[reply]

Property 3 differs slightly from bilinearity

Please, either

1. Give more detail for the reasoning in footnote [1]:

φ(m · r, n) = φ(m, r · n) = r · φ(m, n)
implies that φ(m, n) = 0 when R is the quaternions.

or

2. Change the reasoning to: The scalar multiplication

r · (m ⊗ n) := (m · r) ⊗ n
cannot be well-defined for noncommutative R, because for some r, s ∈ R there is
r s ≠ s r,
and then the representatives (m · s, n) and (m, s · n), although they map to the same coset
(m · s) ⊗ n = m ⊗ (s · n),
do not map to the same coset under r · (–), since
r · ((m · s) ⊗ n) = ((m · s) · r) ⊗ n = (m · (s r)) ⊗ n
is unequal to
r · (m ⊗ (s · n)) = (m · r) ⊗ (s · n) = ((m · r) · s) ⊗ n = (m · (r s)) ⊗ n.

--Nomen4Omen (talk) 08:43, 29 May 2015 (UTC)[reply]
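A sketch of the detail requested in 1. (this spells out what the footnote presumably intends; treat it as a reconstruction): applying the displayed property twice to m · (rs) = (m · r) · s gives

  (rs) \cdot \varphi(m, n) = \varphi(m \cdot (rs),\, n) = \varphi((m \cdot r) \cdot s,\, n) = s \cdot \varphi(m \cdot r,\, n) = s \cdot \bigl(r \cdot \varphi(m, n)\bigr) = (sr) \cdot \varphi(m, n),

so (rs − sr) · φ(m, n) = 0 for all r, s ∈ R. Over the quaternions take r = i and s = j: then rs − sr = 2k is invertible, which forces φ(m, n) = 0.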