Oh of course.
In that case, what if
$(a_{11}, a_{12}, a_{13}, \ldots, a_{1n}) \neq \lambda (a_{m1}, a_{m2}, a_{m3}, \ldots, a_{mn})$ for every scalar $\lambda$?
In general, the condition that is both necessary and sufficient for $\mathbf{0}$ to be the only solution is that the columns of the matrix are what is called
linearly independent (and for a square matrix, 'columns' here can equivalently be replaced by 'rows').
Roughly, this means that all the equations of our system must truly be 'different': none of them can be obtained from the others using linear combinations (i.e. by adding together scalar multiples of them).
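To make 'truly different' precise, here is the standard definition (writing $r_1, \ldots, r_m$ for the rows; the notation is mine): the rows are linearly independent if

$c_1 r_1 + c_2 r_2 + \cdots + c_m r_m = \mathbf{0}$ only when $c_1 = c_2 = \cdots = c_m = 0$.

Equivalently, no row can be written as a linear combination of the remaining ones.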
E.g. Consider this homogeneous system:
$x + y + z = 0$
$2x + y + z = 0$
$3x + 2y + 2z = 0.$
Here, equation (row) 3 is simply the sum of the previous two, and it turns out this forces infinitely many solutions, even though no row is a scalar multiple of another. It's the linear-combination condition, not the scalar-multiple one, that matters; with just two rows the two notions happen to coincide, but with three or more rows they don't.
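Concretely, writing out the coefficient rows:

$(3, 2, 2) = (1, 1, 1) + (2, 1, 1),$

yet $(3, 2, 2) \neq \lambda (1, 1, 1)$ and $(3, 2, 2) \neq \lambda (2, 1, 1)$ for every scalar $\lambda$.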
The last equation is essentially not a 'different' equation. It is redundant, since we can 'get' it from the previous two equations (and by 'get', I mean it is a
linear combination of them). So this system is really just two equations in three unknowns, and it's no surprise there are infinitely many solutions. The same idea generalises to any number of rows.
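For completeness, here is the elimination spelled out: subtracting the first equation from the second gives $x = 0$; the first equation then forces $y = -z$; and the third, being the sum of the first two, holds automatically. So the solution set is

$\{\, (0, -t, t) : t \in \mathbb{R} \,\},$

a whole line of solutions through the origin, not just $\mathbf{0}$.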
Edit: answered above.