!"#$%&'#(%()!)*%+,&-.! /-(%/-(#01#)2-1!3(%( -4566)/7)16868 !"#$%&'$%(&)*"+,%$-" &+.$/'&0&%+!1'&"&"/*+2,")*'/&3*" +*#"4!),'*'&.+&)"5!%,5/1'$+&)*" 9:;<:=4;6)9>4?=<6@>A6 *?:B=69C)DEFG !"#$%&'#()*+,*-%./&01% ,02!.()*+,*30(4%30(#2' 05677*38*97:7: !"#$%&!#'()(*!&*+#(")%&,(-(.(,/# 0"'#&0"1#!2#30,&(4%*#!)%&# 4!335,0,()%#*%3(&("6* 1;<=;>7?*1@55A>=7=@;: -A?B>7CAD*EFGH PODACI O MENTORU I CˇLANOVIMA KOMISIJE MENTOR: dr Zoran Petrovic´ vanredni profesor, Univerzitet u Beogradu, Matematicˇki fakultet CˇLANOVI KOMISIJE: dr Aleksandar Lipkovski redovni profesor, Univerzitet u Beogradu, Matematicˇki fakultet dr Zˇarko Mijajlovic´ redovni profesor, Univerzitet u Beogradu, Matematicˇki fakultet dr Branko Malesˇevic´ vanredni profesor, Univerzitet u Beogradu, Elektrotehnicˇki fakultet Datum odbrane: O DELITELJIMA NULE, INVERTIBILNOSTI I RANGU MATRICA NAD KOMUTATIVNIM POLUPRSTENIMA REZIME Poluprsten sa nulom i jedinicom je algebarska struktura, koja generalisˇe prsten. Naime, dok prsten u odnosu na sabiranje cˇini grupu, poluprsten cˇini samo monoid. Nedostatak oduzimanja cˇini ovu strukturu znatno tezˇom za istrazˇivanje od prstena. Predmet izucˇavanja u ovoj tezi predstavljaju matrice nad komutativnim poluprstenima (sa nulom i jedinicom). Motivacija za istrazˇivanje je sadrzˇana u pokusˇaju da se ispita koje se osobine za matrice nad komutativnim prstenima mogu prosˇiriti na matrice nad komutativnim poluprstenima, a takodje, sˇto je tesno povezano sa ovim pitanjem, kako se svojstva modula nad prstenima prenose na polumodule nad poluprstenima. Izdvajaju se tri tipa dobijenih rezultata. Najpre se prosˇiruju poznati rezultati, koji se ticˇu dimenzije prostora n-torki eleme- nata iz nekog poluprstena na drugu klasu poluprstena od do sada poznatih i ispravljaju neke gresˇke u radu drugih autora. Ovo je pitanje u tesnoj vezi sa pitanjem invertibilnosti matrica nad poluprstenima. Drugi tip rezultata se ticˇe ispitivanja delitelja nule u poluprstenu svih matrica nad ko- mutativnim poluprstenima i to posebno za klasu inverznih poluprstena (to su poluprsteni u kojima postoji neka vrste uopshtenog inverza u odnosu na sabiranje). Zbog neposto- janja oduzimanja, ne mozˇe se koristiti determinanta, kao sˇto je to u slucˇaju matrica nad komutativnim prstenima, ali, zbog cˇinjenice da su u pitanju inverzni poluprsteni, moguc´e je definisati neku vrstu determinante u ovom slucˇaju, sˇto omoguc´ava formulaciju odgovo- rajuc´ih rezultata u ovom slucˇaju. Zanimljivo je da se za klase matrica za koje se dobijaju rezultati, levi i desni delitelji nule mogu razlikovati, sˇto nije slucˇaj za komutativne prstene. Trec´i tip rezultata ticˇe se pitanja uvodjenja novog ranga za matrice nad komutativnim poluprstenima. Za ovakve matrice vec´ postoji niz funkcija ranga, koje generalisˇu pos- tojec´u funkciju ranga za matrice nad poljima. U ovoj tezi je predlozˇena nova funkcija ranga, koja je bazirana na permanenti, koju je moguc´e definisati i za poluprstene, za raz- liku od determinante, a koja ima dovoljno dobra svojstva da se tako definishe rang. Kljucˇne recˇi: komutativni prsteni, poluprsteni, pozitivna determinanta, delitelji nule, in- vertibilnost, rang, matrice Naucˇna oblast: Matematika Uzˇa naucˇna oblast: Algebra UDK broj: 512.558(043.3) ON ZERO DIVISORS, INVERTIBILITY AND RANK OF MATRICES OVER COMMUTATIVE SEMIRINGS ABSTRACT Semiring with zero and identity is an algebraic structure which generalizes a ring. Namely, while a ring under addition is a group, a semiring is only a monoid. 
The lack of subtraction makes this structure considerably harder to investigate than a ring. The subject of this thesis is matrices over commutative semirings (with zero and identity). The motivation for this study is the attempt to determine which properties of matrices over commutative rings can be extended to matrices over commutative semirings and, closely related to this question, how properties of modules over rings carry over to semimodules over semirings.

Three types of results are obtained. First, known results concerning the dimension of the space of n-tuples of elements of a semiring are extended to a class of semirings beyond those previously covered, and some errors in a paper by other authors are corrected. This question is closely related to the invertibility of matrices over semirings.

The second type of results concerns zero divisors in the semiring of all matrices over a commutative semiring, in particular over the class of additively inverse semirings (semirings possessing a kind of generalized inverse with respect to addition). Because of the lack of subtraction, one cannot use the determinant, as one does for matrices over commutative rings; but since the semirings in question are inverse semirings, a suitable analogue of the determinant can be defined, which allows the corresponding results to be formulated. It is interesting that, for the class of matrices for which the results are obtained, left and right zero divisors may differ, which is not the case over commutative rings.

The third type of results concerns the introduction of a new rank for matrices over commutative semirings. For such matrices there already exists a number of rank functions generalizing the rank function for matrices over fields. In this thesis a new rank function is proposed, based on the permanent, which, unlike the determinant, can be defined over any semiring and which has good enough properties to support a notion of rank.

Keywords: commutative rings, semirings, positive determinant, zero divisors, invertibility, rank, matrices
Scientific field: Mathematics
Narrow scientific field: Algebra
UDC number: 512.558(043.3)

Contents

1 Introduction 1
2 Preliminaries on semirings 4
3 Cardinality of bases in semilinear spaces over zerosumfree semirings 12
  3.1 Definitions and previous results 12
  3.2 Cardinality of bases in Vn 15
  3.3 Additional remarks 18
4 Zero divisors for matrices over commutative semirings 20
  4.1 Preliminaries 20
  4.2 Auxiliary results 22
  4.3 Main results 25
5 The permanent rank of matrices 29
  5.1 Rank functions 29
  5.2 Permanent rank 38
References 45

Chapter 1

Introduction

In 1894, Dedekind introduced the modern definition of an ideal of a ring. He took the family Id(R) of all ideals of a ring R and defined on it a sum (+) and a product (·).
He found that the system (Id(R),+,·) satisfies most of the rules that the system (R,+,·) satisfies, but (Id(R),+,·) is not a ring, because it is not a group under addition: it is only a commutative monoid. The system (Id(R),+,·) has all the properties of an important algebraic structure which was later called a semiring.

In 1934, H. S. Vandiver published a paper about an algebraic system consisting of a nonempty set S with two binary operations, addition (+) and multiplication (·), such that S is a semigroup under each of them. The system (S,+,·) obeys both distributive laws, but it does not obey the cancellation law of addition. This system is ring-like, but it is not a ring; Vandiver called it a semiring. Vandiver was later informed by R. Brauer that Dedekind had already introduced this concept, though not under the name semiring. Vandiver observed in his papers that there are semirings which cannot be embedded into rings.

The structure of semirings was further investigated by many authors in the 1950s, who studied various aspects of the algebraic theory of semirings, including the embedding of semirings into richer semirings. In recent years, semirings have proved to be an important tool in many areas of applied mathematics and computer science.

A semiring is similar to a ring; the difference is that a semiring need not have additive inverses. Therefore, every ring is a semiring. Examples of semirings which are not rings are the non-negative reals R+, the non-negative rationals Q+, and the non-negative integers Z+ with the usual addition and multiplication.

Matrix theory over various semirings is an important subject, and it has attracted the attention of many researchers, working both in theoretical and in applied mathematics, during the last decades.

This thesis is organized as follows.

Some basic definitions and examples of semirings and related notions (linear independence, semilinear spaces, ideals, annihilators) are given in Chapter 2.

Chapter 3 is devoted to the cardinality of bases in semilinear spaces of n-dimensional vectors over commutative zerosumfree semirings. It is not true in general that every basis has n elements. Some examples are given, and a correct condition guaranteeing that every basis over this type of semirings has n elements is presented. The results of this chapter were published in [9].

In Chapter 4, we investigate zero divisors for matrices over commutative additively inverse semirings with zero 0 and identity 1. It is known that a matrix over a commutative ring is a zero divisor if and only if its determinant is a zero divisor. Since the determinant cannot be defined for matrices over a general semiring, some changes are needed. It is possible to define a function similar to the determinant for matrices over additively inverse semirings, and for matrices satisfying an additional condition on their entries this function allows us to determine whether a matrix is a zero divisor. It is interesting that over commutative semirings it is no longer true that a square matrix is a left zero divisor if and only if it is a right zero divisor, as it is over commutative rings. These results are contained in [10].

Finally, Chapter 5 introduces a new type of rank, which we call the permanent rank of matrices over commutative semirings.
Namely, for semirings there is already a number of rank functions, generalizing various aspects of the rank function for matrices over a field. The permanent of a matrix can be defined for matrices over any commutative semiring, and it has good enough properties to establish some results analogous to those for matrices over rings. Some examples are given to illustrate these results.

Chapter 2

Preliminaries on semirings

In this chapter we give some basic definitions concerning semirings and matrices over semirings. For more information about these (and other) notions, we refer the reader to [1, 3, 4, 5, 6, 7, 11, 12, 13, 17].

Definition 1. A semiring with zero 0 and identity 1, L = (L,+,·,0,1), is an algebraic structure satisfying the following axioms:
(i) (L,+,0) is a commutative monoid;
(ii) (L,·,1) is a monoid;
(iii) for all r, s, t ∈ L: r·(s+t) = r·s + r·t and (s+t)·r = s·r + t·r;
(iv) for all r ∈ L: r·0 = 0 = 0·r;
(v) 1 ≠ 0.

Definition 2. A semiring L is commutative if (L,·,1) is a commutative monoid.

Definition 3. A semiring L is called zerosumfree (or antinegative) if a + b = 0, for a, b ∈ L, implies a = b = 0.

Example 1. ([0,1],+,·), where [0,1] is the unit interval of real numbers, is a semiring, where: a+b = max{a,b} and a·b = min{a,b}; or a+b = min{a,b} and a·b = max{a,b}; or even a+b = max{a,b} with · the usual product of real numbers.

Example 2. L = (R ∪ {−∞},+,·,−∞,0) is a semiring, where R is the set of all real numbers, a+b = max{a,b}, and a·b is the usual sum of a and b in R ∪ {−∞}. This semiring is usually called the max-plus algebra or the schedule algebra.

Example 3. For any nonempty set X, the system (P(X),∪,∩), consisting of the power set P(X) of X under the binary operations ∪ and ∩, is a semiring; in fact, it is a commutative zerosumfree semiring, where for A, B ∈ P(X), A ∪ B is regarded as the addition and A ∩ B as the multiplication on P(X). The system (P(X),∩,∪) is also a commutative semiring, but it is not zerosumfree, since A ∩ B = ∅ does not imply A = ∅ and B = ∅.

Example 4. Let S be a nonempty set with at least two elements. Define + on S by a + b = b, and · by a·b = a, for all a, b ∈ S. Then (S,+,·) is a semiring which is not commutative.

Example 5. The set of non-negative integers Z+, the set of non-negative rational numbers Q+ and the set of non-negative real numbers R+, under the usual addition and multiplication, are familiar examples of commutative zerosumfree semirings. Note that none of them is a ring.

Example 6. For each positive integer n, the set Mn(S) of all n×n matrices over a semiring S is a semiring under the usual operations of matrix addition and multiplication.

Definition 4. A semiring (S,+,·) is called additively inverse if (S,+) is an inverse semigroup, i.e., for each x ∈ S there is a unique x′ ∈ S such that x = x + x′ + x and x′ = x′ + x + x′. An additively inverse commutative semiring with zero and identity is a generalization of a commutative ring with identity.

Definition 5. Let A = (A,+A,0A) be a commutative monoid and let L = (L,+,·,0,1) be a commutative semiring. If an external multiplication • : L × A → A is defined such that:
(i) for all r, s ∈ L, a ∈ A: (r·s) • a = r • (s • a);
(ii) for all r ∈ L, a, b ∈ A: r • (a +A b) = r • a +A r • b;
(iii) for all r, s ∈ L, a ∈ A: (r + s) • a = r • a +A s • a;
(iv) for all a ∈ A: 1 • a = a;
(v) for all r ∈ L, a ∈ A: 0 • a = r • 0A = 0A,
then A is called a left L-semimodule. One can analogously define the notion of a right L-semimodule.
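Before moving on, the axioms of Definitions 1 and 3 are easy to check mechanically for the max-min semiring of Example 1. The following is a minimal sketch (not part of the original text, and the helper names are ours) that brute-force checks the axioms on a finite subset of [0,1], so the verification is exhaustive on that carrier.

```python
from itertools import product

# Brute-force check of Definition 1 (semiring axioms) and Definition 3
# (zerosumfree) for the max-min semiring on a finite subset of [0, 1].
carrier = [0.0, 0.25, 0.5, 0.75, 1.0]
add, mul = max, min          # a + b = max{a, b},  a * b = min{a, b}
zero, one = 0.0, 1.0         # additive and multiplicative identities

def is_semiring(L, add, mul, zero, one):
    for r, s, t in product(L, repeat=3):
        if add(add(r, s), t) != add(r, add(s, t)):          return False  # + associative
        if mul(mul(r, s), t) != mul(r, mul(s, t)):          return False  # * associative
        if mul(r, add(s, t)) != add(mul(r, s), mul(r, t)):  return False  # left distributivity
        if mul(add(s, t), r) != add(mul(s, r), mul(t, r)):  return False  # right distributivity
    for r, s in product(L, repeat=2):
        if add(r, s) != add(s, r):                          return False  # + commutative
    for r in L:
        if add(r, zero) != r or mul(r, one) != r or mul(one, r) != r:
            return False                                    # identities
        if mul(r, zero) != zero or mul(zero, r) != zero:
            return False                                    # 0 is absorbing
    return True

print(is_semiring(carrier, add, mul, zero, one))  # True
# Zerosumfree (Definition 3): max{a, b} = 0 forces a = b = 0.
print(all(a == b == 0.0 for a, b in product(carrier, repeat=2) if max(a, b) == 0.0))  # True
```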
Definition 6. Let L = (L,+,·,0,1) be a commutative semiring. A semimodule over L is called an L-semilinear space. The elements of a semilinear space are called vectors, and the elements of L scalars.

Example 7. Let L = (L,+,·,0,1) be a commutative semiring, X ≠ ∅ and A = L^X = { f | f : X → L}. For all f, g ∈ L^X, x ∈ X and r ∈ L we define:
( f +A g)(x) = f(x) + g(x),
(r • f)(x) = r · f(x).
We also define the zero element 0A as the function 0A : x ↦ 0. Then A = (L^X,+A,0A) is an L-semilinear space.

Definition 7. Let A be an L-semilinear space. For λ1,...,λn ∈ L and a1,...,an ∈ A, the element λ1a1 + ··· + λnan ∈ A is called a linear combination of the vectors a1,...,an.

Definition 8. Let A be an L-semilinear space. Vectors a1,...,an ∈ A, where n ≥ 2, are called linearly independent if none of these vectors can be expressed as a linear combination of the others. Otherwise, we say that the vectors a1,...,an are linearly dependent. A single non-zero vector is linearly independent. An infinite set of vectors is linearly independent if every finite subset of it is linearly independent.

Example 8. Let A = Ln be the L-semilinear space of n-dimensional vectors over the commutative semiring L = ([0,1],+,·,0,1), where a+b = max{a,b} and a·b = min{a,b}.

(a) The vectors e1 = (1,0,...,0)^T, e2 = (0,1,...,0)^T, ..., en = (0,0,...,1)^T are linearly independent. To show this, suppose that there exist c1, c2, ..., cn−1 ∈ [0,1] such that
e1 = c1·e2 + ··· + cn−1·en,
that is,
(1,0,...,0)^T = (0, min{c1,1}, ..., min{cn−1,1})^T.
Comparing first coordinates gives 1 = 0, a contradiction, so e1 cannot be expressed as a linear combination of the others. The same argument applies to each of the vectors e2, e3, ..., en, so none of them can be expressed as a linear combination of the others. Hence the vectors e1,...,en are linearly independent.

(b) The vectors f1 = (0,1,1,...,1)^T, f2 = (1,0,1,...,1)^T, ..., fn = (1,1,1,...,0)^T are linearly independent. To show this, suppose that there exist c1,...,cn−1 ∈ [0,1] such that
f1 = c1·f2 + ··· + cn−1·fn.
Since min{c,1} = c and the vector fj+1 has its only zero in position j+1, the right-hand side equals
(max{c1,...,cn−1}, max{c2,...,cn−1}, ..., max{c1,...,cn−2})^T.
The first coordinate gives max{c1,...,cn−1} = 0, so c1 = ··· = cn−1 = 0. But the second coordinate gives max{c2,...,cn−1} = 1, which means that one of c2,...,cn−1 equals 1, a contradiction. So f1 cannot be expressed as a linear combination of f2,...,fn. The same argument applies to the other vectors f2,...,fn, so none of them can be expressed as a linear combination of the others. Hence the vectors f1,...,fn are linearly independent.

Example 9. Let (S,+,·) = ([0,1],max,min) be the semiring where [0,1] is the unit interval of real numbers, a+b = max{a,b} and a·b = min{a,b}. The vectors from [0,1]^n
a1 = (a,0,0,...,0), a2 = (0,a,0,...,0), ..., an = (0,0,0,...,a),
where a ∈ (0,1), are linearly independent (this is easy to see). But the vectors a1,...,an,an+1, where an+1 = (a,a,0,...,0), are linearly dependent, since an+1 = a1 + a2.
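The independence test of Definition 8 can be mechanized for this max-min semiring. The following sketch assumes the standard solvability criterion for max-min relational equations (a system max_j min(c_j, u_j[i]) = v[i] is solvable iff the greatest candidate solution, built with the Gödel implication, is a solution); the helper names are ours.

```python
# godel(a, b) is the Godel implication: the largest c with min(c, a) <= b.
def godel(a, b):
    return 1.0 if a <= b else b

def in_span(v, spanning):
    """Is v a max-min linear combination of the vectors in `spanning`?"""
    if not spanning:
        return all(x == 0.0 for x in v)
    # Greatest candidate coefficients; if they fail, no coefficients work.
    cs = [min(godel(u[i], v[i]) for i in range(len(v))) for u in spanning]
    combo = [max(min(c, u[i]) for c, u in zip(cs, spanning)) for i in range(len(v))]
    return combo == list(v)

def linearly_independent(vectors):
    # Definition 8: no vector is a combination of the others.
    return not any(in_span(v, vectors[:k] + vectors[k+1:])
                   for k, v in enumerate(vectors))

e = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]  # Example 8(a)
f = [[0.0 if i == j else 1.0 for j in range(3)] for i in range(3)]  # Example 8(b)
print(linearly_independent(e), linearly_independent(f))             # True True
a = 0.5                                                             # Example 9
print(linearly_independent([[a, 0, 0], [0, a, 0], [0, 0, a], [a, a, 0]]))  # False
```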
We need the notions of a generating set and a basis.

Definition 9. A nonempty subset B of vectors from A is called a generating set if every vector from A is a linear combination of vectors from B.

Definition 10. A linearly independent generating set is called a basis.

The notion of an invertible matrix over a commutative semiring is completely analogous to the notion of an invertible matrix over a commutative ring.

Definition 11. Let S be a semiring. A matrix A in Mn(S) is right invertible (resp. left invertible) if AB = In (resp. BA = In) for some B ∈ Mn(S). In this case the matrix B is called a right inverse (resp. left inverse) of A in Mn(S). The matrix A in Mn(S) is called invertible if it is both right and left invertible.

Definition 12. We define the set U(L) of (multiplicatively) invertible elements of L by
U(L) := {a ∈ L | (∃b ∈ L)(a·b = b·a = 1)}.

Definition 13. An element r ∈ L is additively irreducible if r = a + b implies r = a or r = b.

The notion of an ideal and of its annihilator is important for the study of the permanent rank in the last chapter.

Definition 14. A subset I of a semiring S is a right (resp. left) ideal of S if for a, b ∈ I and s ∈ S, a + b ∈ I and as ∈ I (resp. sa ∈ I); I is an ideal if it is both a right and a left ideal.

Definition 15. Let S be a commutative semiring and let I be an ideal of S. The annihilator of the ideal I, denoted by AnnS(I), is the set of all elements x ∈ S such that x·y = 0 for each y ∈ I, i.e.,
AnnS(I) := {x ∈ S | (∀y ∈ I) x·y = 0}.
It is clear that the annihilator is itself an ideal of S.

Chapter 3

Cardinality of bases in semilinear spaces over zerosumfree semirings

3.1 Definitions and previous results

We begin this chapter with an example.

Example 10. Let L = (L,+,·,0,1) be a commutative semiring. For each n ≥ 1, let
Vn(L) := {(r1,...,rn)^T : ri ∈ L, 1 ≤ i ≤ n}.
Then Vn(L) becomes an L-semilinear space if the operations are defined as follows:
(r1,...,rn)^T + (s1,...,sn)^T = (r1+s1,...,rn+sn)^T;
r • (r1,...,rn)^T = (r·r1,...,r·rn)^T,
for all r, ri, si ∈ L. We denote this semilinear space by Vn and call it the semilinear space of n-dimensional vectors over L.

The following question naturally arises: if a semilinear space has a basis, is it true that all bases have the same cardinality? For a discussion of this problem for a particular class of semirings (join-semirings), see [19]. In general the answer is negative, as the following example shows.
Example 11. Let L = (L,⊕,⊙,0,1) be the commutative zerosumfree semiring where L is the set Z+ of non-negative integers and ⊕, ⊙ are defined by a ⊕ b = gcd{a,b} and a ⊙ b = lcm{a,b} for a, b ∈ L, where gcd (resp. lcm) denotes the greatest common divisor (resp. least common multiple) of a and b; we also put a ⊙ 0 = 0. Then in the L-semilinear space V2 the vectors
(1,0)^T, (0,1)^T,
the vectors
(2,0)^T, (3,0)^T, (0,2)^T, (0,3)^T,
and the vectors
(2,0)^T, (3,0)^T, (0,1)^T
all form bases of V2, but they do not have the same number of elements: the first basis has two elements, the second four, and the third three.

We show that these sets of vectors are indeed bases. Consider first the set {(1,0)^T, (0,1)^T}. To see that it is linearly independent, suppose that (1,0)^T = m ⊙ (0,1)^T for some m ∈ Z+. Then
(1,0)^T = (m ⊙ 0, m ⊙ 1)^T = (0,m)^T,
which is impossible, since 1 ≠ 0. Similarly, (0,1)^T ≠ n ⊙ (1,0)^T for every n ∈ Z+. Hence the vectors (1,0)^T and (0,1)^T are linearly independent. They also span V2: for any vector (x,y)^T ∈ V2,
(x,y)^T = x ⊙ (1,0)^T ⊕ y ⊙ (0,1)^T,
so {(1,0)^T, (0,1)^T} is a generating set. Being linearly independent and generating, it is a basis of V2.

Next we show that the vectors (2,0)^T, (3,0)^T, (0,2)^T, (0,3)^T are linearly independent. Suppose that
(2,0)^T = m ⊙ (3,0)^T ⊕ n ⊙ (0,2)^T ⊕ p ⊙ (0,3)^T
for some m, n, p ∈ Z+. Comparing first coordinates gives 2 = lcm{m,3}, which is impossible for every m ∈ Z+, since 3 ∤ 2. In the same way one finds that none of the other vectors can be represented as a linear combination of the remaining ones, so these vectors are linearly independent. To see that they span V2, note that
(1,0)^T = (2,0)^T ⊕ (3,0)^T and (0,1)^T = (0,2)^T ⊕ (0,3)^T,
so the four vectors generate (1,0)^T and (0,1)^T, and hence every vector of V2. Being linearly independent and spanning, they form a basis of V2. In the same way one can check that the vectors (2,0)^T, (3,0)^T, (0,1)^T form a basis of V2.
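These claims are easy to cross-check computationally. The sketch below (ours, not from the thesis) verifies the three bases of Example 11; the independence test is a bounded brute-force search, so it is a heuristic companion to the divisibility argument above, not a proof.

```python
from math import gcd
from itertools import product

# Carrier: Z+ with a (+) b = gcd(a, b), a (.) b = lcm(a, b),
# additive identity 0 (gcd(a, 0) = a), and the convention lcm(a, 0) = 0.
def lcm(a, b):
    return 0 if a == 0 or b == 0 else a * b // gcd(a, b)

def combo(cs, vecs):
    out = [0] * len(vecs[0])
    for c, u in zip(cs, vecs):
        out = [gcd(o, lcm(c, x)) for o, x in zip(out, u)]
    return out

def dependent_bounded(vectors, bound=30):
    # Is some vector a combination of the others, with coefficients <= bound?
    for k, v in enumerate(vectors):
        rest = vectors[:k] + vectors[k+1:]
        for cs in product(range(bound + 1), repeat=len(rest)):
            if combo(cs, rest) == v:
                return True
    return False

B1 = [[1, 0], [0, 1]]
B2 = [[2, 0], [3, 0], [0, 2], [0, 3]]
B3 = [[2, 0], [3, 0], [0, 1]]
print(dependent_bounded(B1), dependent_bounded(B2), dependent_bounded(B3))
# False False False

# Generation by B2: (x, y) = x.(2,0) + x.(3,0) + y.(0,2) + y.(0,3), spot-checked.
for x, y in product(range(1, 20), repeat=2):
    assert combo([x, x, y, y], B2) == [x, y]
```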
Let us now concentrate on the case of the space Vn of n-dimensional vectors. In [14], the authors claim that the following result holds: in the L-semilinear space Vn, where L is a zerosumfree semiring, each basis has the same number of elements if and only if 1 is an additively irreducible element (see Theorem 3.1 in that paper). Alas, this result is not true. For example, in the case L = Q+ (where, as mentioned before, Q+ denotes the non-negative rational numbers with the usual addition and multiplication), all bases in Vn have n elements, while 1 obviously is not additively irreducible. We show how to correct this result, and we point out that everything depends on the 1-dimensional case.

3.2 Cardinality of bases in Vn

We assume that the semiring L is zerosumfree. In this section we prove the main result, in the following theorem.

Theorem 1. For every n ≥ 1, every basis in Vn has n elements if and only if every basis in V1 has 1 element.

Of course, one only needs to prove the non-trivial part. We begin with the following lemma.

Lemma 1. Suppose that every basis in V1 has one element. Then, if a1,...,am ∈ L are such that a1 + ··· + am = 1, at least one of the ai is invertible.

Proof. We prove this by induction on m. The claim is true for m = 1. Suppose that it is true for some m ≥ 1; we prove it for m + 1. So, suppose that

a1 + ··· + am+1 = 1.  (3.2.1)

Then {a1,...,am+1} is a generating set for V1. Since every basis for V1 has 1 element, this set is not a basis; so at least one of its elements is a linear combination of the others. Suppose, for example, that am+1 is a linear combination of the others:

am+1 = λ1a1 + ··· + λmam,  (3.2.2)

for some λ1,...,λm ∈ L. Substituting this expression for am+1 into equation (3.2.1), we get

(1 + λ1)a1 + ··· + (1 + λm)am = 1.  (3.2.3)

By the induction hypothesis, at least one of the elements (1+λi)ai is invertible. It follows that ai is invertible as well, and we are done. □

Proof of Theorem 1. We assume that every basis of V1 has 1 element. Suppose that A1,...,Am is a basis for Vn, with Aj = (a1j, a2j, ..., anj)^T for aij ∈ L. We have the canonical basis e1 = (1,0,...,0)^T, e2 = (0,1,...,0)^T, ..., en = (0,0,...,1)^T for Vn. Since A1,...,Am is also a basis, the vectors of the canonical basis may be expressed as linear combinations of A1,...,Am. As in ordinary linear algebra, this may be expressed as a product of matrices: A·B = In, where the vectors A1,...,Am are the columns of the matrix A, and In is the identity matrix of order n, whose columns are exactly the vectors e1,...,en. From this equation, by looking at the first column of the product, we get

a11b11 + a12b21 + ··· + a1mbm1 = 1
a21b11 + a22b21 + ··· + a2mbm1 = 0
···
an1b11 + an2b21 + ··· + anmbm1 = 0.

From Lemma 1, it follows that at least one of the products a1kbk1 is invertible. We may assume without loss of generality that it is a11b11 that is invertible. Since L is zerosumfree, we conclude that a21b11 = ··· = an1b11 = 0. Since b11 ∈ U(L), it follows that a21 = ··· = an1 = 0. So A1 = a11e1.

We proceed to the second column of the product of the matrices A and B (where the already established a21 = ··· = an1 = 0 has been used):

a11b12 + a12b22 + ··· + a1mbm2 = 0
a22b22 + ··· + a2mbm2 = 1
···
an2b22 + ··· + anmbm2 = 0.

As before, it follows from Lemma 1 that a2kbk2 ∈ U(L) for some k. For simplicity of notation, let us assume that a22b22 ∈ U(L). As in the previous case, we get that A2 = a22e2.

We perform this process for A1,...,Al, where l = min{m,n}, and we get that, for some injective function π : {1,...,l} → {1,...,n},
Ak = aπ(k)π(k) eπ(k) for 1 ≤ k ≤ l, with aπ(k)π(k) ∈ U(L).

1. If m < n, then l = m; let s be an element of {1,...,n} which is not in the image of π (in this case π cannot be onto). It is clear that es is not a linear combination of A1,...,Am, so these vectors cannot form a basis. Therefore m ≥ n.

2. If m > n, we obtain that A1,...,An already form a basis, so the remaining vectors would be linear combinations of them, contradicting linear independence.

Therefore we must have m = n. □

3.3 Additional remarks

We have proved that it is enough to assume that every basis in V1 has 1 element in order to conclude that, for any n, every basis in Vn has n elements. The fact that every basis in V1 has 1 element may be expressed in various equivalent ways, one of which is the following: every basis in V1 has 1 element if and only if a + b = 1, for a, b ∈ L, implies a ∈ U(L) or b ∈ U(L). Instead of using this condition as a necessary and sufficient condition for the cardinality of every basis of Vn to be n, we have decided to present the result in the form of Theorem 1, in order to emphasize that everything depends on V1.

Chapter 4

Zero divisors for matrices over commutative semirings

It is known that a square matrix A over a commutative ring R with identity is a left (right) zero divisor in Mn(R) if and only if the determinant of A is a zero divisor in R (see [2]). Additively inverse commutative semirings with zero 0 and identity 1 are a generalization of commutative rings with identity. In this chapter we present some results for matrices over this type of semirings which generalize the above result for matrices over commutative rings. For results concerning the invertibility of matrices over semirings, we refer the reader to [13, 15, 16, 18].

4.1 Preliminaries

In this section we collect only the notions necessary for the presentation of the main results in the last section.

Definition 16. An element x of a semiring (S,+,·) with zero 0 (identity 1) is said to be additively (multiplicatively) invertible if x + y = y + x = 0 (x·y = y·x = 1) for some y ∈ S; such a y is unique.

Definition 17. A square matrix A over a commutative semiring S is called a left zero divisor in Mn(S) if AB = O for some nonzero matrix B ∈ Mn(S). Similarly, A is called a right zero divisor in Mn(S) if CA = O for some nonzero matrix C ∈ Mn(S).

Definition 18. [15] Let Sn be the symmetric group of degree n ≥ 2, An the alternating group of degree n, and Bn = Sn \ An; that is,
An = {σ ∈ Sn : σ is an even permutation}, Bn = {σ ∈ Sn : σ is an odd permutation}.
For A = (aij) ∈ Mn(S), the positive determinant and the negative determinant of A are defined, respectively, by
det+A = Σ_{σ∈An} ∏_{i=1}^{n} aiσ(i), det−A = Σ_{σ∈Bn} ∏_{i=1}^{n} aiσ(i).

We can see that An = {σ⁻¹ : σ ∈ An} and Bn = {σ⁻¹ : σ ∈ Bn}, that det+In = 1 and det−In = 0, and that for A ∈ Mn(S), det+Aᵗ = det+A and det−Aᵗ = det−A (see [6]).

A proof of the following lemma may be found in [15].

Lemma 2. For distinct i, j ∈ {1,...,n}, σ ↦ (σ(i) σ(j))σ is a bijection from An onto Bn.

We also need the following propositions.

Proposition 1. If (S,+,·) is an additively inverse semiring, then for all x, y ∈ S:
(i) (x′)′ = x;
(ii) (x + y)′ = y′ + x′;
(iii) (xy)′ = x′y = xy′;
(iv) x′y′ = xy.

Proposition 2. If (S,+,·) is an additively inverse semiring with zero 0 and x, y ∈ S are such that x + y = 0, then y = x′.
It is known that a commutative semiring S may be embedded into a ring if and only if it satisfies the additive cancellation law: a + x = b + x implies a = b.

Example 12. There are additively inverse semirings which cannot be embedded into a ring. For example, let (S,+,·) = ([0,1],⊕,⊙), where [0,1] = {x ∈ R : 0 ≤ x ≤ 1}, x ⊕ y := max{x,y} and x ⊙ y := min{x,y}. We know that this is an inverse semiring (see, e.g., [15]). However, since 1/2 + 1 = 1 = 1/3 + 1 and 1/2 ≠ 1/3, this semiring does not satisfy the additive cancellation law, so it cannot be embedded into a ring.

4.2 Auxiliary results

In what follows, all semirings are inverse semirings with zero 0 and identity 1. In particular, Mn(S) contains the identity matrix In, all of whose diagonal entries equal 1 and all of whose off-diagonal entries equal 0. It is clear that A·In = In·A = A for every A ∈ Mn(S).

Let S be an additively inverse semiring, a ∈ S and n ≥ 0 a non-negative integer. We define a^(n) as follows:

a^(n) := a, if n is even; a′, if n is odd.

It is easy to check that (a^(n))^(m) = a^(n+m) and that a^(n) b^(m) = (ab)^(n+m). Using this notation, we define a determinant for matrices over additively inverse semirings.

Definition 19. Let S be an additively inverse semiring and A ∈ Mn(S). We define d̃et(A) as follows:
d̃et(A) := det+(A) + (det−(A))′.
Note that, if we put sgn(σ) = 0 for even permutations and sgn(σ) = 1 for odd permutations, we have
d̃et(A) = Σ_{σ∈Sn} (∏_{i=1}^{n} aiσ(i))^(sgn(σ)),
which is completely analogous to the usual expansion of the determinant.

We can also define the adjoint matrix of a given matrix in Mn(S), as follows.

Definition 20. Let S be an additively inverse semiring and A ∈ Mn(S). For i, j ∈ {1,...,n}, let Mij(A) ∈ Mn−1(S) be the matrix obtained from A by deleting its i-th row and j-th column. Then ãdj(A) ∈ Mn(S) is defined by
ãdj(A)ij = (d̃et(Mji(A)))^(i+j).
This definition is, as in the case of d̃et(A), completely analogous to the usual definition of the adjoint matrix.

It is easy to see that the Laplace expansion with respect to any row holds in our case for d̃et (the proof is completely analogous to the usual case, so we omit it).

Theorem 2. If S is an additively inverse semiring, A ∈ Mn(S) and i ∈ {1,...,n}, then
d̃et(A) = Σ_{j=1}^{n} aij (d̃et(Mij(A)))^(i+j).

Example 13. Let us assume that ab is additively invertible, and let A ∈ M2(S) be the matrix
A = (a b; a b)
(rows separated by semicolons). Then d̃et(A) = ab + (ba)′ = ab + (ab)′ = 0.
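The quantities det+, det− and d̃et are straightforward to compute mechanically. Below is a small sketch (ours, not from the thesis) parameterized by the semiring operations and by the map x ↦ x′; it is checked against the ring Z (an inverse semiring with x′ = −x, where d̃et is the usual determinant) and against ([0,1],max,min), where addition is idempotent, so x′ = x.

```python
from itertools import permutations
import operator

def sign_parity(p):
    # Parity of a permutation given as a tuple: 0 = even, 1 = odd.
    seen, parity = set(), 0
    for i in range(len(p)):
        if i in seen:
            continue
        j, clen = i, 0
        while j not in seen:
            seen.add(j); j = p[j]; clen += 1
        parity ^= (clen - 1) & 1
    return parity

def det_pm(A, add, mul, zero):
    # det+ and det- of Definition 18: sums over even / odd permutations.
    n = len(A)
    detp = detm = zero
    for p in permutations(range(n)):
        term = A[0][p[0]]
        for i in range(1, n):
            term = mul(term, A[i][p[i]])
        if sign_parity(p) == 0:
            detp = add(detp, term)
        else:
            detm = add(detm, term)
    return detp, detm

def det_tilde(A, add, mul, zero, prime):
    dp, dm = det_pm(A, add, mul, zero)     # Definition 19
    return add(dp, prime(dm))

# In the ring Z, det~ is the ordinary determinant:
print(det_tilde([[1, 2], [3, 4]], operator.add, operator.mul, 0, operator.neg))  # -2
# In ([0,1], max, min), x' = x; this reproduces Example 14 below:
print(det_tilde([[1.0, 1.0], [1.0, 1.0]], max, min, 0.0, lambda x: x))           # 1.0
```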
Keeping this example in mind, one should not be surprised that the following theorem holds.

Theorem 3. Let S be an additively inverse semiring and let A ∈ Mn(S) be a square matrix such that for all i, j, k with j ≠ k the elements aij aik are additively invertible. If, in addition, A has two equal rows, then d̃et(A) = 0.

Proof. In addition to the simplest Laplace expansion mentioned in Theorem 2, the more general expansion with respect to any k rows also holds. We use this expansion with respect to the two equal rows; since any 2×2 submatrix formed from these two rows has zero determinant (see the previous example), we conclude that all the terms in the expansion are zero, and therefore d̃et(A) = 0. Alternatively, if the equal rows are the r-th and the s-th, we can first expand d̃et(A) with respect to the r-th row and then expand all the terms with respect to the s-th row. All the terms in this final expansion can then be collected into pairs which cancel each other, since these two rows are equal and the corresponding elements are additively invertible. □

Example 14. It is not necessarily the case that the determinant of a matrix with two equal rows is zero. For example, let (S,+,·) = ([0,1],⊕,⊙) be the semiring from Example 1. Then for
A = (1 1; 1 1)
we have d̃et(A) = 1·1 + (1·1)′ = 1 + 1 = 1.

The following theorem is vital for our main results.

Theorem 4. Let S be an additively inverse semiring and let A ∈ Mn(S) be such that aij aik is additively invertible for all i, j, k with j ≠ k. Then
A · ãdj(A) = d̃et(A) In,
where In ∈ Mn(S) is the identity matrix.

Proof. The proof is standard. The (i,k)-th entry of the product is
(A·ãdj(A))ik = Σ_{j=1}^{n} aij ãdj(A)jk = Σ_{j=1}^{n} aij (d̃et(Mkj(A)))^(k+j).
If i = k, we have
(A·ãdj(A))ii = Σ_{j=1}^{n} aij (d̃et(Mij(A)))^(i+j) = d̃et(A).
If i ≠ k, we again have an expansion of the determinant of a matrix, but in this case of a matrix with equal i-th and k-th rows. By Theorem 3, this sum is equal to zero. □

Remark 1. One can check that the equality A·ãdj(A) = d̃et(A)In need not hold for all matrices A ∈ Mn(S); the matrix from Example 14 is a counterexample.

4.3 Main results

In this section we prove the main results.

Theorem 5. Let S be an additively inverse semiring with zero 0 and identity 1, and let A ∈ Mn(S) be such that aij aik is additively invertible for all i, j, k with j ≠ k. If A is a right zero divisor, then d̃et(A) is a zero divisor.

Proof. Since A is a right zero divisor, there exists a non-zero matrix B ∈ Mn(S) such that B·A = O. Multiplying this equality on the right by ãdj(A), we get B·A·ãdj(A) = O, and, taking into account Theorem 4, B d̃et(A) = O. Since B ≠ O, there is an entry bij ≠ 0 with bij d̃et(A) = 0, so d̃et(A) is a zero divisor. □
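In a ring, Theorem 4 specializes to the classical adjugate identity, which gives a convenient self-contained sanity check of Definition 20 (here in Z, viewed as an inverse semiring with x′ = −x, so the exponent (i+j) becomes the usual sign (−1)^(i+j)):

```python
def minor(A, i, j):
    # M_ij(A): delete row i and column j (Definition 20).
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

def adj_tilde(A):
    # adj~(A)[i][j] = (det~ M_ji(A))^(i+j); in Z this is (-1)^(i+j) det(M_ji).
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
d = det(A)
I_scaled = [[d if i == j else 0 for j in range(3)] for i in range(3)]
print(matmul(A, adj_tilde(A)) == I_scaled)  # True: A adj~(A) = det~(A) I
```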
Theorem 6. Let S be an additively inverse semiring with zero 0 and identity 1, and let A ∈ Mn(S) be such that aij aik is additively invertible for all i, j, k with j ≠ k. If d̃et(A) is a zero divisor, then A is a left zero divisor.

Proof. Since d̃et(A) is a zero divisor, there exists x ∈ S, x ≠ 0, such that d̃et(A)·x = 0. If aij·x = 0 for all i, j, then A·(xIn) = O and we are done. So let us assume that aij·x ≠ 0 for some i, j. Since d̃et(A)·x = 0, there must exist a maximal r, 1 ≤ r ≤ n−1, such that x annihilates the determinants of all submatrices of A of order r+1, while there exists a submatrix C of A of order r with d̃et(C)·x ≠ 0. We may assume without loss of generality (and in order to simplify notation) that C is the submatrix formed by the first r rows and columns of A. Let B denote the submatrix of A formed by the first r+1 rows and columns of A. We claim that the product A·v is equal to zero, where v is the column vector
v = ( (d̃et(Mr+1,1(B)))^(r+2) x, (d̃et(Mr+1,2(B)))^(r+3) x, ..., (d̃et(Mr+1,r+1(B)))^(2r+2) x, 0, ..., 0 )^T.

If 1 ≤ i ≤ r, the i-th entry of A·v is
Σ_{j=1}^{r+1} aij (d̃et(Mr+1,j(B)))^(r+1+j) x = 0,
since the sum is just the determinant of a matrix of order r+1 having equal i-th and (r+1)-st rows (we replace the (r+1)-st row of the matrix B by its i-th row). For i = r+1 we have
Σ_{j=1}^{r+1} ar+1,j (d̃et(Mr+1,j(B)))^(r+1+j) x = d̃et(B) x = 0,
since x annihilates the determinants of all submatrices of A of order larger than r. The same conclusion holds for i > r+1, since the corresponding sum is again the determinant (or its additive inverse) of a submatrix of A of order larger than r. On the other hand,
(d̃et(Mr+1,r+1(B)))^(2r+2) x = d̃et(C) x ≠ 0
by assumption, so the column v is not zero. Adjoining n−1 zero columns to v, we obtain a matrix D ≠ O such that A·D = O, and we conclude that A is a left zero divisor. □

Corollary 1. Let A ∈ Mn(S) be a matrix with entries in an additively inverse semiring S, such that aij aik is additively invertible for all i, j, k with j ≠ k. Then, if A is a right zero divisor, A is also a left zero divisor.

Proof. The proof follows directly from the previous theorems: if A is a right zero divisor, then d̃et(A) is a zero divisor by Theorem 5, so A is a left zero divisor by Theorem 6. □

Remark 2. The set of left zero divisors in Mn(S) may differ from the set of right zero divisors, even if the conditions on the entries from the previous theorems hold. This is shown in the following example.

Example 15. Let (S,+,·) = ([0,1],⊕,⊙) be the semiring from Example 1. The matrix
A = (1 0; 1 0)
is a left zero divisor, but it is not a right zero divisor. Namely,
(1 0; 1 0) · (0 0; 1 1) = (0 0; 0 0),
so A is a left zero divisor. Let us show that A is not a right zero divisor. Suppose that B ∈ M2(S) is such that B·A = O. Writing B = (x y; z t), we get
(x y; z t) · (1 0; 1 0) = (0 0; 0 0).
It follows that max{x,y} = x ⊕ y = 0 and max{z,t} = z ⊕ t = 0. We conclude that x = y = z = t = 0, so B = O and A is not a right zero divisor.
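Example 15 is easy to confirm computationally. The sketch below (ours) verifies the left annihilator exactly and then searches a sample grid of entries for a right annihilator; the grid search only samples the infinite carrier, but the exact argument above shows nothing is missed.

```python
from itertools import product

def mm(A, B):  # max-min matrix product over ([0,1], max, min)
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, 0.0], [1.0, 0.0]]
O = [[0.0, 0.0], [0.0, 0.0]]
print(mm(A, [[0.0, 0.0], [1.0, 1.0]]) == O)  # True: A is a left zero divisor

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
witnesses = [B for x, y, z, t in product(grid, repeat=4)
             if (B := [[x, y], [z, t]]) != O and mm(B, A) == O]
print(witnesses)                              # []: no right annihilator found
```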
Chapter 5

The Permanent Rank of Matrices over Commutative Semirings

5.1 Rank functions

There are many essentially different rank functions for matrices over semirings. All of them coincide for matrices over fields, but they are essentially different for matrices over semirings; moreover, these ranks need not coincide with the usual rank function even if the semiring S is a field. We collect the well-known ones here. For a more detailed exposition, see [8].

Definition 21. The factor rank of a matrix A ∈ Mm×n(S), A ≠ O, is the smallest positive integer k such that there exist matrices B ∈ Mm×k(S) and C ∈ Mk×n(S) with A = BC. The factor rank of A is denoted by rank(A). The factor rank of the zero matrix O is taken to be 0.

Definition 22. The term rank of a matrix A ∈ Mm×n(S) is the minimum number of lines (rows or columns) needed to include all nonzero entries of A. The term rank of A is denoted by t(A).

Definition 23. The zero-term rank of A ∈ Mm×n(S) is the minimum number of lines (rows or columns) needed to include all zero entries of A. The zero-term rank of A is denoted by z(A).

Definition 24. The row (resp. column) rank of A ∈ Mm×n(S) is the dimension of the linear span of the rows (resp. columns) of A. The row (resp. column) rank of A is denoted by r(A) (resp. c(A)).

Definition 25. The spanning row (resp. column) rank of A ∈ Mm×n(S) is the minimum number of rows (resp. columns) of A that span all rows (resp. columns) of A. The spanning row (resp. spanning column) rank of A is denoted by sr(A) (resp. sc(A)).

Definition 26. The matrix A ∈ Mm×n(S) has maximal row (resp. column) rank k if it has k linearly independent rows (resp. columns) and any k+1 rows (resp. columns) are linearly dependent. The maximal row (resp. column) rank of A is denoted by mr(A) (resp. mc(A)).

For comparison, we recall the usual rank of a matrix A over a field F.

Definition 27. The rank of a matrix A ∈ Mm×n(F) is the number of linearly independent rows or columns of A.

Example 16. Let (S,+,·) = (Z+,+,·) and
A = (3 4 0; 0 0 3; 0 0 3) ∈ M3(S).
The term rank of A is equal to two, t(A) = 2, because the minimum number of lines needed to include all nonzero entries of A is two. The zero-term rank of A is equal to three, z(A) = 3, because the minimum number of lines needed to include all zero entries of A is three.

To find the factor rank of A we need to work a little. The factor rank of A is not equal to one, because A cannot be written as a product of two matrices B ∈ M3×1(Z+) and C ∈ M1×3(Z+). We can explain this in the following steps. Suppose that A could be written as such a product:

(3 4 0; 0 0 3; 0 0 3) = (x; y; z) · (a b c) = (ax bx cx; ay by cy; az bz cz).

This corresponds to the system of equations:

ax = 3 (5.1.1)
bx = 4 (5.1.2)
cx = 0 (5.1.3)
ay = 0 (5.1.4)
by = 0 (5.1.5)
cy = 3 (5.1.6)
az = 0 (5.1.7)
bz = 0 (5.1.8)
cz = 3 (5.1.9)

Since cx = 0, either c = 0 or x = 0. But we cannot take c = 0, because that would contradict equations (5.1.6) and (5.1.9); and we cannot take x = 0, because that would contradict equations (5.1.1) and (5.1.2). We conclude that the system has no solution, hence A cannot be written as a product of two matrices B ∈ M3×1(Z+) and C ∈ M1×3(Z+). So the factor rank of A is not equal to one.
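A bounded brute-force companion to this argument (a heuristic check, not a proof, since entries are searched only up to a small bound) confirms that no rank-one factorization exists:

```python
from itertools import product

# Any factorization entry must divide an entry of A, so a small bound suffices
# to witness the contradiction derived from equations (5.1.1)-(5.1.9).
A = [[3, 4, 0], [0, 0, 3], [0, 0, 3]]
found = [(col, row)
         for col in product(range(6), repeat=3)   # B = (x; y; z)
         for row in product(range(6), repeat=3)   # C = (a b c)
         if all(col[i] * row[j] == A[i][j] for i in range(3) for j in range(3))]
print(found)  # []: no rank-one factorization over Z+ with small entries
```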
Now, is the factor rank of A equal to two or not? Let us try to find matrices B ∈ M3×2(Z+) and C ∈ M2×3(Z+) such that A = BC:

(3 4 0; 0 0 3; 0 0 3) = (x1 x2; y1 y2; z1 z2) · (a1 a2 a3; b1 b2 b3).

This corresponds to the system of equations:

a1x1 + b1x2 = 3 (5.1.10)
a2x1 + b2x2 = 4 (5.1.11)
a3x1 + b3x2 = 0 (5.1.12)
a1y1 + b1y2 = 0 (5.1.13)
a2y1 + b2y2 = 0 (5.1.14)
a3y1 + b3y2 = 3 (5.1.15)
a1z1 + b1z2 = 0 (5.1.16)
a2z1 + b2z2 = 0 (5.1.17)
a3z1 + b3z2 = 3 (5.1.18)

From (5.1.13) we have a1y1 = 0, which implies a1 = 0 or y1 = 0. Suppose that a1 = 0. Substituting a1 = 0 into (5.1.10) we get b1x2 = 3, which implies b1 ≠ 0 and x2 ≠ 0. From (5.1.16) we have b1z2 = 0, which implies z2 = 0. From (5.1.17) we have a2z1 = 0, so a2 = 0 or z1 = 0. Suppose that a2 = 0. Substituting a2 = 0 into (5.1.11) we get b2x2 = 4, so b2 ≠ 0 and x2 ≠ 0. From (5.1.18) we have a3z1 = 3, so a3 ≠ 0 and z1 ≠ 0. From (5.1.12) we have a3x1 = 0 and b3x2 = 0; since x2 ≠ 0, b3 = 0, and since a3 ≠ 0, x1 = 0.

From (5.1.10) we have b1x2 = 3, which means b1 = 1 and x2 = 3, or b1 = 3 and x2 = 1. If b1 = 1 and x2 = 3, then (5.1.11) would give b2x2 = 4 with x2 = 3, which is impossible; so it must be that b1 = 3 and x2 = 1, and hence b2 = 4 from (5.1.11). From equations (5.1.15) and (5.1.18) we have a3y1 = 3 and a3z1 = 3, respectively, which imply a3 = 3, y1 = z1 = 1, or a3 = 1, y1 = z1 = 3. Suppose that a3 = 1 and y1 = z1 = 3. Now we have

A = (3 4 0; 0 0 3; 0 0 3) = (0 1; 3 0; 3 0) · (0 0 1; 3 4 0).

Hence the factor rank of A is equal to two, rank(A) = 2.

Let us determine the column rank c(A). Is c(A) = 2? The linear span of the columns of A is
L{(3,0,0)^T, (4,0,0)^T, (0,3,3)^T} = {m(3,0,0)^T + n(4,0,0)^T + p(0,3,3)^T : m,n,p ∈ Z+} = {(3m+4n, 3p, 3p)^T : m,n,p ∈ Z+}.
Suppose that (a,b,c)^T and (d,e,f)^T form a generating set for this space, for some a, b, c, d, e, f ∈ Z+. Then
(3,0,0)^T = r(a,b,c)^T + s(d,e,f)^T
for some r, s ∈ Z+. It follows that

ra + sd = 3 (5.1.19)
rb + se = 0 (5.1.20)
rc + sf = 0 (5.1.21)

so rb = se = rc = sf = 0.

1. If r ≠ 0 and s ≠ 0, we get b = e = c = f = 0, so our vectors would be (a,0,0)^T and (d,0,0)^T. It is clear that (0,3,3)^T ∉ L{(a,0,0)^T, (d,0,0)^T}, so this case is impossible.

2. If r = 0 and s ≠ 0, we get that 3 = sd and e = f = 0. Our vectors would be (a,b,c)^T and (d,0,0)^T, where sd = 3 for some s ∈ Z+, so d ∈ {1,3}. If d = 1, we would get (1,0,0)^T ∈ {(3m+4n, 3p, 3p)^T : m,n,p ∈ Z+}, i.e., 1 = 3m + 4n for some m, n ∈ Z+. This is clearly impossible, and we conclude that d = 3. So the generating vectors would be (a,b,c)^T and (3,0,0)^T, and
(0,3,3)^T = t(a,b,c)^T + u(3,0,0)^T
for some t, u ∈ Z+. We get

ta + 3u = 0 (5.1.22)
tb = 3 (5.1.23)
tc = 3 (5.1.24)

From (5.1.23) it follows that t ≠ 0, and from this and (5.1.22) it follows that a = 0. From (5.1.23) and (5.1.24) it follows that b = c. Our generating vectors would thus be (0,b,b)^T and (3,0,0)^T. Then
(4,0,0)^T = v(0,b,b)^T + w(3,0,0)^T
for some v, w ∈ Z+. It follows that 4 = 3w, i.e., 3 | 4, which is false. So we get a contradiction.

3. The case r ≠ 0, s = 0 is analogous to the previous one; it is also impossible.

4. The case r = 0, s = 0 is also impossible, since 3 ≠ 0.

We conclude that c(A) = 3. As for the row rank: the row space of A is spanned by the two distinct rows (3,4,0) and (0,0,3), which are linearly independent, so r(A) = 2.
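Before turning to the spanning ranks, the rank-two factorization obtained above is easy to verify directly (plain integer arithmetic, no extra assumptions):

```python
# Check A = B C for the factorization found in Example 16.
B = [[0, 1], [3, 0], [3, 0]]
C = [[0, 0, 1], [3, 4, 0]]
prod = [[sum(B[i][k] * C[k][j] for k in range(2)) for j in range(3)] for i in range(3)]
print(prod == [[3, 4, 0], [0, 0, 3], [0, 0, 3]])  # True
```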
Let us determine the spanning column rank sc(A). Is sc(A) = 1? We note that m·(3,0,0)^T ≠ (4,0,0)^T for every m ∈ Z+, because 3m ≠ 4 for every m ∈ Z+; so sc(A) ≠ 1. Is sc(A) = 2? We note that m·(3,0,0)^T + n·(4,0,0)^T ≠ (0,3,3)^T for all m, n ∈ Z+. And if m·(3,0,0)^T + n·(0,3,3)^T = (4,0,0)^T for some m, n ∈ Z+, then n = 0 and 3m = 4, i.e., 3 | 4, a contradiction; the remaining case is similar. So sc(A) ≠ 2, and hence the spanning column rank of A is equal to 3, sc(A) = 3.

In the same way we can find the spanning row rank sr(A), and sr(A) = 2. Namely, m·(3,4,0) ≠ (0,0,3) for every m ∈ Z+, so sr(A) ≠ 1, while the first two rows clearly span all rows of A; so sr(A) = 2.

The maximal column rank mc(A): one can check, as before, that all three columns are linearly independent (we have practically done this in the discussion of the column rank), so mc(A) = 3. Similarly, for the maximal row rank we have mr(A) = 2.

Example 17. In this example we work with the same matrix over the ring of integers, (R,+,·) = (Z,+,·), and find all the ranks. We have
A = (3 4 0; 0 0 3; 0 0 3).
We obtain the same results as over the semiring Z+ for t(A), z(A), rank(A), r(A), sc(A), sr(A) and mr(A): t(A) = 2, z(A) = 3, rank(A) = 2, r(A) = 2, sc(A) = 3, sr(A) = 2 and mr(A) = 2. But c(A) ≠ 3, unlike over the semiring Z+. Note that
(1,0,0)^T = (4,0,0)^T − (3,0,0)^T,
so the column space has the basis {(1,0,0)^T, (0,3,3)^T}, and hence c(A) = 2. Similarly, since 4·(3,0,0)^T − 3·(4,0,0)^T = 0, any three columns are linearly dependent over Z, so mc(A) = 2 as well, unlike over Z+.
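A small arithmetic check accompanying Examples 17 and 18 (plain integer arithmetic, and a preview of the gcd identity used just below):

```python
from math import gcd

# Over the ring Z, subtraction collapses the column rank of A:
# (1,0,0) = (4,0,0) - (3,0,0), and every column is a combination of the basis.
b1, b2 = (1, 0, 0), (0, 3, 3)
for col in [(3, 0, 0), (4, 0, 0), (0, 3, 3)]:
    m, p = col[0], col[1] // 3             # coefficients: col = m*b1 + p*b2
    assert tuple(m * x + p * y for x, y in zip(b1, b2)) == col
# Over (Z+, gcd, lcm) the same collapse happens without subtraction,
# since 3 (+) 4 = gcd(3, 4) already yields the first basis vector:
print(gcd(3, 4))  # 1
```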
Example 18. Let (S,+,·) = (Z+,⊕,⊙) be the commutative semiring where a ⊕ b := gcd{a,b} and a ⊙ b := lcm{a,b} for a and b in Z+, and let
A = (3 4 0; 0 0 3; 0 0 3) ∈ M3(S).
The ranks t(A), z(A), rank(A), mc(A) and mr(A) are as in the example over (Z+,+,·); the difference is in the column rank. Indeed, it is easy to see that the linear span of the columns of A is
L{(3,0,0)^T, (4,0,0)^T, (0,3,3)^T} = L{(1,0,0)^T, (0,3,3)^T},
since 3 ⊕ 4 = gcd{3,4} = 1. So the column rank of A is equal to two, c(A) = 2.

The spanning column rank sc(A): note that (3,0,0)^T ∉ L{(4,0,0)^T, (0,3,3)^T}. Namely, if
(3,0,0)^T = m ⊙ (4,0,0)^T ⊕ n ⊙ (0,3,3)^T = (lcm{m,4}, lcm{3,n}, lcm{3,n})^T,
then 3 = lcm{m,4}, which is impossible because 4 ∤ 3. Similarly, (4,0,0)^T ∉ L{(3,0,0)^T, (0,3,3)^T} and (0,3,3)^T ∉ L{(4,0,0)^T, (3,0,0)^T} = L{(1,0,0)^T}; hence sc(A) = 3.

5.2 Permanent rank

In this section we introduce a new rank for matrices over semirings, based on permanents. Recall that for A ∈ Mn(S), the permanent of A, denoted by per(A), is defined by
per(A) := Σ_{σ∈Sn} a1σ(1) a2σ(2) ··· anσ(n).

Definition 28. Let A ∈ Mn(S). For each k = 1,...,n, Ik(A) denotes the ideal of S generated by the permanents of all k×k submatrices of A. Thus, to compute Ik(A), one calculates the permanents of all k×k submatrices of A and then forms the ideal of S which these permanents generate.

The Laplace expansion, which also holds for semirings, implies that the permanents of (k+1)×(k+1) submatrices of A lie in Ik(A). Thus we have the following chain of ideals in S:
In(A) ⊆ In−1(A) ⊆ ··· ⊆ I2(A) ⊆ I1(A) ⊆ I0(A) = S.
It is notationally convenient to extend the definition of Ik(A) to all values of k ∈ Z as follows: Ik(A) = (0) if k > n, and Ik(A) = S if k ≤ 0. Then we have
(0) = In+1(A) ⊆ In(A) ⊆ ··· ⊆ I1(A) ⊆ I0(A) = S.
Computing the annihilator of each ideal in this sequence, we get the following chain of ideals:
(0) = AnnS(S) ⊆ AnnS(I1(A)) ⊆ AnnS(I2(A)) ⊆ ··· ⊆ AnnS(In(A)) ⊆ AnnS(In+1(A)) = S.
Notice that if AnnS(Ik(A)) ≠ (0), then AnnS(Ir(A)) ≠ (0) for all r ≥ k. Thus the following definition makes perfectly good sense.

Definition 29. The permanent rank of a matrix A ∈ Mn(S), denoted by permrank(A), is the maximum integer k such that the annihilator of the ideal generated by the permanents of all k×k submatrices of A is zero, i.e.,
permrank(A) = max{k : AnnS(Ik(A)) = (0)}.

The basic properties of this rank function are collected in the following theorem.

Theorem 7. Let A ∈ Mn(S). Then:
(a) 0 ≤ permrank(A) ≤ n;
(b) permrank(A) = permrank(Aᵗ);
(c) permrank(A) = 0 if and only if AnnS(I1(A)) ≠ (0);
(d) permrank(A) < n if and only if per(A) ∈ Z(S), the set of all zero divisors of S.

Proof. (a) I0(A) = S and AnnS(S) = (0), so permrank(A) ≥ 0. On the other hand, if k > n, then Ik(A) = (0) and AnnS((0)) = S; therefore permrank(A) ≤ n.

(b) Since Iα(A) = Iα(Aᵗ) for all α ∈ Z, we have AnnS(Iα(A)) = AnnS(Iα(Aᵗ)), hence permrank(A) = permrank(Aᵗ).

(c) Suppose permrank(A) = 0, i.e., max{k : AnnS(Ik(A)) = (0)} = 0. Then k = 0 is the largest k with AnnS(Ik(A)) = (0), so AnnS(Ik(A)) ≠ (0) for all k ≥ 1; in particular, AnnS(I1(A)) ≠ (0). Conversely, assume that AnnS(I1(A)) ≠ (0). Since
AnnS(I1(A)) ⊆ AnnS(I2(A)) ⊆ ··· ⊆ AnnS(In(A)) ⊆ AnnS(In+1(A)) = AnnS((0)) = S,
we have AnnS(Ik(A)) ≠ (0) for all k ≥ 1, while AnnS(I0(A)) = (0). Hence permrank(A) = 0.

(d) Suppose per(A) ∈ Z(S). Then there exists s ∈ S, s ≠ 0, such that per(A)·s = 0. Therefore s ∈ AnnS(In(A)), so AnnS(In(A)) ≠ (0), and it follows that permrank(A) < n. Conversely, assume that permrank(A) < n, i.e., max{k : AnnS(Ik(A)) = (0)} < n, so AnnS(In(A)) ≠ (0). It follows that there exists s ∈ AnnS(In(A)) \ {0}; then s·per(A) = 0 and per(A) ∈ Z(S). □
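For a finite commutative semiring, Definitions 28 and 29 can be evaluated directly, since ideals and annihilators can be enumerated. The following sketch (ours) does this for Z/6Z and cross-checks Example 20 below.

```python
from itertools import combinations, permutations

N = 6
add = lambda a, b: (a + b) % N
mul = lambda a, b: (a * b) % N
S = range(N)

def per(A):  # permanent of Definition in Section 5.2
    total = 0
    for p in permutations(range(len(A))):
        t = 1
        for i in range(len(A)):
            t = mul(t, A[i][p[i]])
        total = add(total, t)
    return total

def ideal(gens):
    # Close {0} under x + s*g; finite carrier, so the loop terminates.
    I = {0}
    while True:
        new = {add(x, mul(s, g)) for x in I for s in S for g in gens} | I
        if new == I:
            return I
        I = new

def permrank(A):
    n, r = len(A), 0
    for k in range(1, n + 1):
        gens = [per([[A[i][j] for j in cols] for i in rows])
                for rows in combinations(range(n), k)
                for cols in combinations(range(n), k)]
        Ik = ideal(gens)
        ann = {s for s in S if all(mul(s, x) == 0 for x in Ik)}
        if ann == {0}:      # Ann_S(I_k(A)) = (0)
            r = k
    return r

print(permrank([[2, 2], [0, 2]]))  # 0  (Example 20(a))
print(permrank([[2, 0], [0, 3]]))  # 1  (Example 20(b))
print(permrank([[1, 2], [3, 5]]))  # 2  (Example 20(c))
```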
Corollary 2. Let A ∈ Mn(S). If per(A) ∈ U(S), then permrank(A) = n.

Proof. Suppose per(A) ∈ U(S); then per(A) ∉ Z(S), and part (d) of the last theorem implies that permrank(A) = n. □

We now discuss examples of the permanent rank for matrices over commutative semirings with many zero divisors. Let X be any nonempty set. We use the commutative semiring (S,+,·) = (P(X),∪,∩); so, in what follows, a, b, c, d are subsets of X. We denote by ⟨a1,...,an⟩ the ideal of S generated by the elements a1,...,an.

Let A = (a b; c d) ∈ M2(S), where a, b, c, d ⊆ X. We compute Ik(A) for k = 0, 1, 2:
I0(A) = S, I1(A) = ⟨a, b, c, d⟩, I2(A) = ⟨ad + bc⟩.
We have ⟨ad+bc⟩ ⊆ ⟨a,b,c,d⟩, i.e., I2(A) ⊆ I1(A), and AnnS(⟨a,b,c,d⟩) ⊆ AnnS(⟨ad+bc⟩). Note that z ∈ AnnS(⟨a,b,c,d⟩) if and only if
z·a = 0 ⇔ z ∩ a = ∅, z·b = 0 ⇔ z ∩ b = ∅, z·c = 0 ⇔ z ∩ c = ∅, z·d = 0 ⇔ z ∩ d = ∅.
If there is z ≠ ∅ satisfying these conditions, then a ∪ b ∪ c ∪ d ≠ X. Namely, if a ∪ b ∪ c ∪ d = X, then, since z ⊆ X,
z = z ∩ X = z ∩ (a ∪ b ∪ c ∪ d) = (z∩a) ∪ (z∩b) ∪ (z∩c) ∪ (z∩d) = ∅ ∪ ∅ ∪ ∅ ∪ ∅ = ∅.
So AnnS(⟨a,b,c,d⟩) ≠ {0} ⇔ a ∪ b ∪ c ∪ d ≠ X. Also, AnnS(⟨ad+bc⟩) ≠ {0} ⇔ ad + bc ≠ X. Consequently: if a ∪ b ∪ c ∪ d ≠ X, then permrank(A) = 0; if a ∪ b ∪ c ∪ d = X and ad + bc ≠ X, then permrank(A) = 1; finally, if ad + bc = X, then permrank(A) = 2.

Having discussed the permanent rank of matrices in M2(S), we now discuss it for some matrices in M3(S). Let
A = (a b c; b a b; c b a) ∈ M3(S),
where (S,+,·) = (P(X),∪,∩), X is any set, and a, b, c ⊆ X. We compute Ik(A) for k = 0, 1, 2, 3:
I0(A) = S,
I1(A) = ⟨a, b, c⟩,
I2(A) = ⟨a+b, ab+bc, b+ac, a+c, ba+cb⟩ = ⟨a+b, a+c, b+ac⟩,
I3(A) = ⟨a + bc + cb + ac + ba + ab⟩ = ⟨a + bc + ac + ab⟩ = ⟨a + bc⟩.
We explain how to compute I3(A):
per(a b c; b a b; c b a) = a·per(a b; b a) + b·per(b b; c a) + c·per(b a; c b) = a·(a² + b²) + b·(ba + bc) + c·(b² + ac) = a + ab + ab + bc + bc + ac;
since ac = a ∩ c ⊆ a and a + ac = a ∪ (a ∩ c) = a, we get I3(A) = ⟨a + bc⟩.
We have ⟨a+bc⟩ ⊆ ⟨a+b, a+c, b+ac⟩ ⊆ ⟨a, b, c⟩, i.e., I3(A) ⊆ I2(A) ⊆ I1(A), and
AnnS(⟨a,b,c⟩) ⊆ AnnS(⟨a+b, a+c, b+ac⟩) ⊆ AnnS(⟨a+bc⟩).
We have AnnS(⟨a,b,c⟩) ≠ {0} ⇔ a ∪ b ∪ c ≠ X. Indeed, z ∈ AnnS(⟨a,b,c⟩) if and only if z·a = z·b = z·c = 0. If a ∪ b ∪ c ≠ X, then there exists x ∈ X \ (a∪b∪c), and we may take z = {x}, since then z∩a = ∅, z∩b = ∅ and z∩c = ∅; if a ∪ b ∪ c = X, then AnnS(⟨a,b,c⟩) = {0}. Also, z ∈ AnnS(⟨a+b, a+c, b+ac⟩) if and only if z·(a+b) = z·(a+c) = z·(b+ac) = 0, so
AnnS(⟨a+b, a+c, b+ac⟩) ≠ {0} ⇔ (a+b) + (a+c) + (b+ac) ≠ X ⇔ a + b + c + ac ≠ X ⇔ a + b + c ≠ X,
and AnnS(⟨a+bc⟩) ≠ {0} ⇔ a + bc ≠ X.

Now we apply this to a numerical example.

Example 19. Let X = {1,2,3,4,5}, a = {1,2,3}, b = {3,4}, c = {4,5}. We see that a ∪ b ∪ c = X, so AnnS(⟨a,b,c⟩) = {0} and AnnS(⟨a+b, a+c, b+ac⟩) = {0}. But AnnS(⟨a+bc⟩) ≠ {0}, because a ∪ (b ∩ c) = {1,2,3} ∪ {4} = {1,2,3,4} ≠ X: taking z = {5} ⊆ X \ (a ∪ (b∩c)) gives a nonzero element of AnnS(⟨a+bc⟩). Hence permrank(A) = 2.
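Example 19 can also be checked concretely, encoding sets as frozensets. The sketch below (ours) uses the fact established above that, in (P(X),∪,∩), the annihilator of an ideal is trivial exactly when the union of its generators is X.

```python
from itertools import combinations, permutations

X = frozenset({1, 2, 3, 4, 5})
a, b, c = frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5})
A = [[a, b, c], [b, a, b], [c, b, a]]

def per(M):
    # Permanent in (P(X), union, intersection); X is the multiplicative identity.
    total = frozenset()
    for p in permutations(range(len(M))):
        t = X
        for i in range(len(M)):
            t = t & M[i][p[i]]
        total |= t
    return total

def permrank(A):
    n, r = len(A), 0
    for k in range(1, n + 1):
        pers = [per([[A[i][j] for j in cols] for i in rows])
                for rows in combinations(range(n), k)
                for cols in combinations(range(n), k)]
        if frozenset().union(*pers) == X:   # Ann_S(I_k(A)) is trivial
            r = k
    return r

print(per(A) == a | (b & c), permrank(A))   # True 2
```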
Example 20. Let S = Z/6Z = {0,1,2,3,4,5}.

(a) Suppose A = (2 2; 0 2) ∈ M2(S). Clearly A is a nonzero matrix, and every entry of A is a zero divisor in S. We have I2(A) = ⟨4⟩ = 4S and I1(A) = ⟨0,2⟩ = 2S, and
AnnS(I2(A)) = AnnS(4S) = 3S ≠ (0), AnnS(I1(A)) = AnnS(2S) = 3S ≠ (0).
Thus permrank(A) = 0, by part (c) of Theorem 7.

(b) Let B = (2 0; 0 3) ∈ M2(S). Every entry of B is a zero divisor in S. Since per(B) = 0, part (d) implies permrank(B) < 2. Since I1(B) = ⟨0,2,3⟩ = 2S + 3S = S and AnnS(I1(B)) = AnnS(S) = (0), part (c) gives permrank(B) ≠ 0. Therefore permrank(B) = 1.

(c) Suppose C = (1 2; 3 5) ∈ M2(S). We have per(C) = 1·5 + 2·3 = 11 = 5 ∈ U(S). Then permrank(C) = 2, by Corollary 2.

Bibliography

[1] Javed Ahsan, John N. Mordeson, Muhammad Shabir, Fuzzy Semirings with Applications to Automata Theory, Studies in Fuzziness and Soft Computing, vol. 278, Springer, Berlin, 2012.
[2] W. C. Brown, Matrices over Commutative Rings, Marcel Dekker, New York, 1993.
[3] P. Butkovič, Max-algebra: the linear algebra of combinatorics?, Linear Algebra Appl. 367 (2003) 313–335.
[4] R. A. Cuninghame-Green, Minimax Algebra, Lecture Notes in Economics and Mathematical Systems, vol. 166, Springer, Berlin, 1979.
[5] R. A. Cuninghame-Green, P. Butkovič, Bases in max-algebra, Linear Algebra Appl. 389 (2004) 107–120.
[6] S. Ghosh, Matrices over semirings, Information Sciences 90 (1996) 221–230.
[7] J. S. Golan, Semirings and Their Applications, Kluwer Academic Publishers, Dordrecht/Boston/London, 1999.
[8] Alexander Guterman, Rank and determinant functions for matrices over semirings, London Mathematical Society Lecture Notes 347 (2007) 1–33.
[9] Asmaa M. Kanan, Zoran Z. Petrović, Note on cardinality of bases in semilinear spaces over zerosumfree semirings, Linear Algebra Appl. 439 (2013), no. 10, 2795–2799.
[10] Asmaa M. Kanan, Zero divisors for matrices over commutative semirings, ScienceAsia, to appear.
[11] P. Moller, Théorie algébrique des systèmes à événements discrets, Thèse de doctorat, Paris, 1988.
[12] A. Di Nola, A. Lettieri, I. Perfilieva, V. Novák, Algebraic analysis of fuzzy systems, Fuzzy Sets and Systems 158 (2007) 1–22.
[13] C. Reutenauer, H. Straubing, Inversion of matrices over a commutative semiring, Journal of Algebra 88 (1984) 350–360.
[14] Qian-yu Shu, Xue-ping Wang, Bases in semilinear spaces over zerosumfree semirings, Linear Algebra Appl. 435 (2011) 2681–2692.
[15] S. Sombatboriboon, W. Mora, Y. Kemprasit, Some results concerning invertible matrices over semirings, ScienceAsia 37 (2011) 130–135.
[16] Y. J. Tan, On invertible matrices over antirings, Linear Algebra Appl. 423 (2007) 428–444.
[17] E. Wagneur, Moduloïds and pseudomodules. 1. Dimension theory, Discrete Math. 98 (1991) 57–73.
[18] Shan Zhao, Xue-ping Wang, Invertible matrices and semilinear spaces over commutative semirings, Inform. Sci. 180 (2010) 5115–5124.
[19] Shan Zhao, Xue-ping Wang, Bases in semilinear spaces over join-semirings, Fuzzy Sets and Systems 182 (2011) 93–100.

Biography of the author

Asmaa M. Kanan was born on 12 June 1979 in Sabratha, Libya. In 2001 she completed her undergraduate studies at the Department of Mathematics of the Faculty of Sciences of the 7th of April University in Zawiya, Libya, and in 2005 she obtained her master's degree at the same department. From 2001 to 2005 she worked as a teaching assistant at the 7th of April University in Zawiya, and from 2005 to 2008 as a lecturer at the same university. She has taught courses in the following areas: linear algebra, analytic geometry, solid geometry, ordinary differential equations, complex analysis, differential geometry and linear programming. She began her doctoral studies at the Faculty of Mathematics in Belgrade in the academic year 2009/10.