Welcome to the MRC, Michel!
A scan would be very nice indeed. And.. be honest.. how many times have you cursed yourself for making that deal?
Well, I am still sorry for it today...
Anyway, I've put 2 scans on my website. Note that the original pictures were made 12 years ago! So, don't complain about the quality.
www.denwa.nl/images/MSXCD1.jpg
www.denwa.nl/images/MSXCD2.jpg
The images are full-screen and 512 kB each.
This time, I'll capture this topic back!
It seems that I mixed up the SCC and PSG volumes in my previous post (22.7.2003). So it was the PSG that is logarithmic... which means the idea above works for the SCC (I think!)
If this volume = 2^-((15-n)/2) is correct for the PSG, there are plenty of possibilities for making sample playback tables. Here are two examples of low-volume 3-bit sample playback tables. Both are easy to extend to a much larger size to gain more volume and accuracy:
Sample    ALT 1      ALT 2
         A  B  C    A  B  C   (PSG channel)
--------------------------------
  =0     1  1  1    0  0  0
  =1     3  0  0    1  1  0
  =2     3  1  1    2  1  0
  =3     4  1  0    3  1  0
  =4     3  3  1    3  2  0
  =5     4  2  0    3  3  0
  =6     3  3  3    3  3  2
  =7     5  1  0    5  2  1
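To see what such a table actually produces, here is a small sketch (my own, not from the post) that sums the three channel amplitudes for each row of ALT 1, assuming the amplitude of volume setting n is 2^-((15-n)/2) and that setting 0 is fully silent (the formula itself gives a small nonzero value at n = 0).

```python
def amplitude(n: int) -> float:
    """Relative amplitude of one PSG channel at volume register value n,
    assuming the 2^-((15-n)/2) model from the thread and 0 = silence."""
    return 0.0 if n == 0 else 2.0 ** (-(15 - n) / 2)

# The ALT 1 rows from the table above, one (A, B, C) triple per sample value.
ALT1 = [(1, 1, 1), (3, 0, 0), (3, 1, 1), (4, 1, 0),
        (3, 3, 1), (4, 2, 0), (3, 3, 3), (5, 1, 0)]

# Combined output level per 3-bit sample value = sum of the three channels.
levels = [sum(amplitude(v) for v in row) for row in ALT1]

for sample, level in enumerate(levels):
    print(sample, round(level, 5))
```

Printing the levels like this makes it easy to judge how linear (or not) a candidate table really is.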
I think these tables are actually "too accurate". To get the best possible result, I think a table of up to 10 bits should be calculated; then estimate a good number of playback bits and cut part of the upper and lower bits away from the table.
@NYYRIKKI
do not run too fast :-)
There are better strategies.
One of them is to use successive refinements in quantization.
The basic idea is:
quantize a sample x using a single channel, and play Q(x) on channel A
compute the quantization error y = x - Q(x), quantize it, and play Q(y) on channel B
compute the remaining error z = x - Q(x) - Q(x - Q(x)), quantize it, and play Q(z) on channel C
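A minimal sketch of that refinement scheme (my own toy code, not ARTRAG's), again assuming PSG level n has amplitude 2^-((15-n)/2) with level 0 silent. I use floor quantization here so each residual stays non-negative; nearest-level quantization is another option.

```python
# Available PSG output levels under the 2^-((15-n)/2) model, 0 = silence.
LEVELS = [0.0] + [2.0 ** (-(15 - n) / 2) for n in range(1, 16)]

def quantize(x: float) -> float:
    """Largest PSG level not exceeding x (so the residual is never negative)."""
    return max(level for level in LEVELS if level <= x)

def refine(x: float):
    """Split sample x (0..1) over three channels by successive refinement."""
    a = quantize(x)            # channel A: coarse approximation Q(x)
    b = quantize(x - a)        # channel B: quantized error
    c = quantize(x - a - b)    # channel C: quantized remaining error
    return a, b, c

a, b, c = refine(0.3)
print(a, b, c, 0.3 - (a + b + c))
```

Each step can only shrink the error, so three channels never do worse than one.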
You'll soon see more details on Grauw's website
AR
Hmm... Is it always the closest possible solution when the biggest possible number is selected first?
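Not necessarily, as a quick empirical check shows. This is my own sketch, using the 2^-((15-n)/2) level model from earlier in the thread; it compares "biggest first" greedy selection against an exhaustive search over all three-channel combinations.

```python
from itertools import combinations_with_replacement

# PSG output levels under the 2^-((15-n)/2) model, 0 = silence.
LEVELS = [0.0] + [2.0 ** (-(15 - n) / 2) for n in range(1, 16)]

def greedy(x: float) -> float:
    """Sum of three levels, each picked as the biggest not exceeding the remainder."""
    total = 0.0
    for _ in range(3):
        total += max(level for level in LEVELS if level <= x - total)
    return total

def best(x: float) -> float:
    """Closest achievable sum of any three levels, by exhaustive search."""
    return min((sum(c) for c in combinations_with_replacement(LEVELS, 3)),
               key=lambda s: abs(x - s))

x = 0.999
print(abs(x - greedy(x)), abs(x - best(x)))
```

At x = 0.999 the greedy pick lands near 0.988 (it can never overshoot), while the exhaustive search can take the single level 1.0 and get within 0.001, so greedy is not always optimal.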
Right. Check out the MSX Assembly Page now. It explains 8-bit samples, with the help of ARTRAG.
~Grauw
Very interesting table... too bad I don't read C in any form... "there are only 19 deviations past a certain threshold" Did you calculate what the error % is?
Did you test whether you get better results if you add, for example, sqrt(2) or 0.5 to every sample value?
I just noticed that there is "DB" in front of the table rows... that should be "DW"
Here is an (untested) example routine that plays a sample using that table:
; HL    = SAMPLE TO PLAY (8-bit unsigned)
; DE    = LENGTH TO PLAY
; TABLE = ADDRESS OF SAMPLE TABLE
; RATE  = DELAY BETWEEN SAMPLES

        LD C,#A1
LOOP:   PUSH DE
        PUSH HL
        LD DE,TABLE
        LD A,8          ; PSG register 8 (channel A volume)
        LD L,(HL)       ; fetch sample byte
        LD H,0
        ADD HL,HL       ; 2 bytes per table entry
        ADD HL,DE       ; HL -> table entry
        RLD             ; high nibble of entry -> A
        LD D,A
        RLD             ; low nibble of entry -> A
        LD E,A
        RLD             ; third RLD restores A and (HL)
        INC HL
        LD B,(HL)       ; second entry byte
        OUT (#A0),A     ; select register 8
        INC A
        OUT (C),B       ; channel A volume
        OUT (#A0),A     ; select register 9
        OUT (C),D       ; channel B volume
        INC A
        OUT (#A0),A     ; select register 10
        OUT (C),E       ; channel C volume
        POP HL
        INC HL          ; next sample
        LD B,RATE
DELAY:  DJNZ DELAY      ; crude sample-rate delay
        POP DE
        DEC DE          ; count down length
        LD A,D
        OR E
        JP NZ,LOOP
        RET
PS: This only works if the table is in RAM, because RLD writes back to (HL); the three RLDs together restore the original byte.
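For building such a table, here is my reading of the RLD/OUT sequence above (so treat the layout as an assumption, not gospel): each 2-byte entry holds the channel B and C volumes packed as the high and low nibbles of the first byte, and the channel A volume in the second byte. A table generator could pack entries like this:

```python
def pack_entry(vol_a: int, vol_b: int, vol_c: int) -> bytes:
    """Pack one sample's three PSG volumes into the 2-byte entry the routine
    reads: byte 0 = (channel B volume << 4) | channel C volume,
    byte 1 = channel A volume. Layout inferred from the RLD/OUT order."""
    assert all(0 <= v <= 15 for v in (vol_a, vol_b, vol_c))
    return bytes([(vol_b << 4) | vol_c, vol_a])

entry = pack_entry(3, 1, 1)   # e.g. the "=2" row of ALT 1 above
print(entry.hex())
```

Each packed entry would then be emitted as one DW line in the assembly source.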
Ah, DW, hehe. Corrected it.
Did you calculate what is the error %?
No, not really... I think I still need to improve it a little, because the method used to determine the best set is still sub-optimal.
Did you test, do you get any better results, if you try to add for example sqrt(2) or 0.5 to every sample value?
No... How would that improve results? The PSG has the most accuracy in the lower areas (due to its logarithmic scale).
~Grauw
With regard to the error percentage... It is a bit hard to calculate.
The number 19 I mentioned in the article is the number of points where the deviation is bigger than 0.2% (actually: 1/256*.5) from the value it is supposed to be.
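For reference, that threshold works out as half an 8-bit step, i.e. 0.5/256 ≈ 0.195% of full scale. A sketch of the count with stand-in data (the actual table values aren't in this thread):

```python
def count_deviations(table, threshold=0.5 / 256):
    """Count table entries deviating from an ideal linear ramp by more than
    half an 8-bit step of full scale (the ~0.2% threshold described above)."""
    n = len(table)
    return sum(1 for i, level in enumerate(table)
               if abs(level - i / (n - 1)) > threshold)

ramp = [i / 255 for i in range(256)]   # stand-in data, not the real table
print(count_deviations(ramp))
```

A perfect ramp counts zero deviations; the article's table would count 19 by this measure.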