> but for some reason decided not to make integer types available to programmers
They were there; you had to append % to the variable name to get them (e.g. A% or B%, similar to $ for strings). But integers were not the "default."
BASIC is all about letting you pull stuff out of thin air. No pre-declaring variables needed, or even arrays (simply using an array automatically DIMs it to 10 elements if you haven't DIMed it yourself first). Integer variables in BASIC were 16-bit signed, so you couldn't go higher than 32767 with them. But if you are going to use your $500 home computer in 1980 as a fancy calculator, just learning about this newfangled computer and programming thing, that's too limiting.
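A rough sketch of how that plays out (assuming Commodore-style Microsoft BASIC; the exact error text varies by machine):

    10 T(3) = 1.5   : REM first use of T() auto-DIMs it, as if you'd typed DIM T(10)
    20 PRINT T(3)
    30 A% = 32767   : REM integer variable, 16-bit signed, at its maximum
    40 A% = A% + 1  : REM stops with an error (?ILLEGAL QUANTITY on the C64) -- 32768 won't fit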
I do remember reading that the C64's BASIC converted everything to floating point anyway when evaluating expressions, so using integer variables was actually slower. That also applies to literals: it was actually faster to define a float variable as 0--e.g. N0=0--and use N0 in your code instead of the literal 0, because the interpreter re-parses the text of a literal every time it executes the line, while a variable is just a lookup.
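Roughly the kind of thing people did (a hypothetical inner loop; N0 stands in for the literal 0):

    10 N0 = 0 : C = 0 : DIM A(1000)
    20 REM N0 is a single variable lookup; a literal 0 would be re-converted from text on every pass
    30 FOR I = 1 TO 1000
    40 IF A(I) = N0 THEN C = C + 1
    50 NEXT I
    60 PRINT C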
Floats were 5 bytes in the early-'80s Microsoft BASICs, honestly not "massive" unless you did a large array of them. The later IBM BASICs did have a "double precision" type, which was 8 bytes.
> It probably would have been faster and more memory efficient if all numbers were BCD strings.
I wouldn't be surprised if Mr. Gates seriously considered that during the making of Microsoft BASIC in the late '70s, since BCD makes it easy to keep currency calculations exact.