jeroenherczeg committed
Commit 232177e • Parent: db7e2b9
Upload 14 files
- johnc_plan_1996.txt +0 -0
- johnc_plan_1997.txt +0 -0
- johnc_plan_1998.txt +1189 -0
- johnc_plan_1999.txt +0 -0
- johnc_plan_2000.txt +342 -0
- johnc_plan_2001.txt +129 -0
- johnc_plan_2002.txt +104 -0
- johnc_plan_2003.txt +70 -0
- johnc_plan_2004.txt +37 -0
- johnc_plan_2005.txt +49 -0
- johnc_plan_2006.txt +25 -0
- johnc_plan_2007.txt +81 -0
- johnc_plan_2009.txt +288 -0
- johnc_plan_2010.txt +73 -0
johnc_plan_1996.txt
ADDED
The diff for this file is too large to render.
See raw diff
johnc_plan_1997.txt
ADDED
The diff for this file is too large to render.
See raw diff
johnc_plan_1998.txt
ADDED
@@ -0,0 +1,1189 @@

-----------------------------------------
John Carmack's .plan for Jan 01, 1998
-----------------------------------------

Some of the things I have changed recently:

* fixed the cinematics
* don't clear config after dedicated server
* don't reallocate sockets unless needed
* don't process channel packets while connecting
* rate variable for modem bandwidth choking
* delta compress client usercmds (see the sketch below)
* fixed sound quality changing after intermissions
* fixed PVS problem when head was directly under solid in GL
* added r_drawflat and cl_testlights to cheats
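
The usercmd delta compression is easy to picture in code: instead of sending every field of every movement command, the client writes a bitmask of which fields changed from the previous command, then only those fields. Below is a minimal sketch of the idea; the struct layout, flag names, and message functions are illustrative assumptions, not the actual Quake 2 source.

[code]
/* Hypothetical sketch of delta-compressing client usercmds. */
#include <stdint.h>

typedef struct {
	int16_t	angles[3];
	int16_t	forwardmove, sidemove, upmove;
	uint8_t	buttons;
	uint8_t	msec;
} usercmd_t;

typedef struct { uint8_t data[64]; int cursize; } msg_t;

static void MSG_WriteByte(msg_t *m, int c)  { m->data[m->cursize++] = (uint8_t)c; }
static void MSG_WriteShort(msg_t *m, int c) { MSG_WriteByte(m, c & 0xff); MSG_WriteByte(m, (c >> 8) & 0xff); }

enum {
	CM_ANGLE1  = 1 << 0, CM_ANGLE2 = 1 << 1, CM_ANGLE3 = 1 << 2,
	CM_FORWARD = 1 << 3, CM_SIDE   = 1 << 4, CM_UP     = 1 << 5,
	CM_BUTTONS = 1 << 6
};

void MSG_WriteDeltaUsercmd(msg_t *m, const usercmd_t *from, const usercmd_t *to)
{
	int bits = 0;

	if (to->angles[0] != from->angles[0]) bits |= CM_ANGLE1;
	if (to->angles[1] != from->angles[1]) bits |= CM_ANGLE2;
	if (to->angles[2] != from->angles[2]) bits |= CM_ANGLE3;
	if (to->forwardmove != from->forwardmove) bits |= CM_FORWARD;
	if (to->sidemove != from->sidemove) bits |= CM_SIDE;
	if (to->upmove != from->upmove) bits |= CM_UP;
	if (to->buttons != from->buttons) bits |= CM_BUTTONS;

	MSG_WriteByte(m, bits);              /* one byte says what follows */
	if (bits & CM_ANGLE1)  MSG_WriteShort(m, to->angles[0]);
	if (bits & CM_ANGLE2)  MSG_WriteShort(m, to->angles[1]);
	if (bits & CM_ANGLE3)  MSG_WriteShort(m, to->angles[2]);
	if (bits & CM_FORWARD) MSG_WriteShort(m, to->forwardmove);
	if (bits & CM_SIDE)    MSG_WriteShort(m, to->sidemove);
	if (bits & CM_UP)      MSG_WriteShort(m, to->upmove);
	if (bits & CM_BUTTONS) MSG_WriteByte(m, to->buttons);
	MSG_WriteByte(m, to->msec);          /* frame duration is always sent */
}
[/code]

A command that repeats the previous one costs two bytes instead of a full struct, which is what makes this worthwhile on modem connections.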

There are a few problems that I am still trying to track down:

WSAEADDRNOTAVAIL errors
Map versions differ error
Sometimes connecting and seeing messages but not getting in
Decompression read overrun.

Of course, we don't actually get any of those errors on any of our systems here, so I am having to work remotely with other users to try and fix them, which is a bit tougher.

My New Year's resolution is to improve my coding style by bracing all single-line statements and consistently using following caps on multi-word variable names.

Actually, I am currently trying on the full Sun coding style, but I'm not so sure about some of the comment conventions: don't use multiple lines of // comments, and don't use rows of separating characters in comments. I'm not convinced those are good guidelines.

-----------------------------------------
John Carmack's .plan for Jan 02, 1998
-----------------------------------------

Wired magazine does something that almost no other print magazine we have dealt with does.

They check the statements they are going to print.

I just got a "fact check" questionnaire email from Wired about an upcoming article, and I recall that they did this the last time they did an article about us.

Most of the time when we talk with the press, we try to get them to send us a proof of the article for fact checking. They usually roll their eyes and grudgingly agree, then don't send us anything, or send it to us after it has gone to press.

Wired had a few errors in their statements, but it won't get printed that way because they checked with us.

How refreshing.

--

A small public announcement:

The Linux Expo is looking for:

1. People that develop games or game servers in *nix, and
2. People interested in learning how to develop games in *nix.

Either one should write to ddt@crack.com.

-----------------------------------------
John Carmack's .plan for Jan 03, 1998
-----------------------------------------

New stuff fixed:

* timeout based non-active packet streams
* FS_Read with CD off checks
* dedicated server doesn't allocate client ports
* qport proxy checking stuff
* fixed mouse wheel control
* forced newlines on several Cbuf_AddText ()
* if no nextmap on a level, just stay on same one
* chat maximums to prevent user forced overflows
* limit stringcmds per frame to prevent malicious use
* helped jumping down slopes
* checksum client move messages to prevent proxy bots
* challenge / response connection process
* fixed rcon
* made muzzle flash lights single frame, rather than 0.1 sec

I still don't have an answer to the WSAEADDRNOTAVAIL problem. I have made the packet stream as friendly as possible, but some computers are still choking.

I managed to get fixes for address translating routers done without costing any bandwidth from the server, just a couple bytes from the client, which isn't usually a critical path.

I have spent a fair amount of time trying to protect against "bad" users in this release. I'm sure there will be more things that come up, but I know I got a few of the ones that are currently being exploited.

We will address any attack that can make a server crash. Other attacks will have to have their damage and prevalence weighed against the cost of defending against them.

Client message overflows. The maximum number of commands that can be issued in a user packet has been limited. This prevents a client from doing enough "says" or "kills" to overflow the message buffers of other clients.

Challenge on connection. A connection request to a server is now a two-stage process of requesting a challenge, then using it to connect. This prevents denial of service attacks where connection packets with forged IPs are flooded at a server, preventing any other users from connecting until they time out.
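
To make the challenge/response idea concrete, here is a rough sketch of what the server side can look like: a small table of recently issued challenges keyed by address, handed out on request and verified on connect. The names, table size, and timeout are assumptions for illustration, not the actual Quake 2 code.

[code]
/* Hypothetical sketch of challenge/response connection filtering.
 * An attacker flooding forged-source-IP connect packets never sees
 * the challenge replies, so it can't complete a connection. */
#include <stdlib.h>
#include <time.h>

#define MAX_CHALLENGES	1024

typedef struct { unsigned ip; int challenge; time_t issued; } challenge_t;
static challenge_t challenges[MAX_CHALLENGES];

/* client sent "getchallenge": remember a random number for its address */
int SV_GetChallenge(unsigned from_ip)
{
	challenge_t *c = &challenges[from_ip % MAX_CHALLENGES];

	c->ip = from_ip;
	c->challenge = rand();
	c->issued = time(NULL);
	return c->challenge;     /* sent back to the address in a reply packet */
}

/* client sent "connect <challenge> ...": only proceed if it echoes the
 * number we recently sent to that exact address */
int SV_CheckChallenge(unsigned from_ip, int challenge)
{
	challenge_t *c = &challenges[from_ip % MAX_CHALLENGES];

	if (c->ip != from_ip || c->challenge != challenge)
		return 0;            /* never issued to this address, or forged */
	if (time(NULL) - c->issued > 10)
		return 0;            /* stale challenge */
	return 1;
}
[/code]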

Client packet checksumming. The packets are encoded in a way that will prevent proxies that muck with the packet contents, like the stoogebot, from working.

-----------------------------------------
John Carmack's .plan for Jan 04, 1998
-----------------------------------------

Version 3.10 patch is now out.

ftp://ftp.idsoftware.com/idstuff/quake2/q2-310.exe

A few more minor fixes since yesterday:

* qhost support
* made qport more random
* fixed map reconnecting
* removed s_sounddir
* print out primary / secondary sound buffer status on init
* abort game after a single net error if not dedicated
* fixed sound loss when changing sound compatibility
* removed redundant reliable overflow print on servers
* gl_lockpvs for map development checking
* made s_primary 0 the default

Christian will be updating the bug page tomorrow, so hold off on all reporting for 24 hours, then check the page to make sure the bug is not already known.

http://www.idsoftware.com/cgi-win/bugs.exe

All bug reports should go to Christian: xian@idsoftware.com.

I have had several cases of people with lockup problems and decompression overreads having their problems fixed after they mentioned that they were overclocking either their CPU, their system bus (to 75MHz), or their 3DFX.

It doesn't matter if "it works for everything else", it still may be the source of the problem.

I know that some people are still having problems with vanilla systems, though. I have tried everything I can think of remotely, but if someone from the Dallas area wants to bring a system by our office, I can try some more serious investigations.

Something that has been shown to help with some 3dfx problems is to set "cl_maxfps 31", which will keep the console between level changes from rendering too fast, which has caused some cards to hang the system.

-----------------------------------------
John Carmack's .plan for Jan 09, 1998
-----------------------------------------

We got 70 people on a base100 server, and it died after it wedged at 100% utilization for a while. Tomorrow we will find exactly what overflowed, and do some profiling.

Base100 is really only good for 50 or so players without overcrowding, but we have another map being built that should hold 100 people reasonably well.

I will look into which will be the easier path to more server performance: scalar optimization of whatever is critical now, or splitting it off into some more threads to run on multiple processors. Neither one is trivial.

My goal is to be able to host stable 100 player games in a single map.

I just added a "players" command that will dump the total number of players in the game, and as many frags/names as it can fit in a packet (around 50, I think).

-----------------------------------------
John Carmack's .plan for Jan 11, 1998
-----------------------------------------

I AM GOING OUT OF TOWN NEXT WEEK, DON'T SEND ME ANY MAIL!

Odds are that I will get back and just flush the 500 messages in my mailbox.

No, I'm not taking a vacation. Quite the opposite, in fact.

I'm getting a hotel room in a state where I don't know anyone, so I can do a bunch of research with no distractions.

I bought a new computer specifically for this purpose - a Dolch portable Pentium II system. The significant thing is that it has full-length PCI slots, so I was able to put an Evans & Sutherland OpenGL accelerator in it (not enough room for an Intergraph Realizm, though), and still drive the internal LCD screen. It works out pretty well, but I'm sure there will be conventional laptops with good 3D acceleration available later this year.

This will be an interesting experiment for me. I have always wondered how much of my time that isn't at peak productivity is a necessary rest break, and how much of it is just wasted.

---

The client's IP address is now added to the userinfo before calling ClientConnect(), so any IP filtering / banning rules can now be implemented in the game dll. This will also give some of you crazy types the ability to sync up with multiple programs on the client computers outside of Q2 itself.

A new API entry point has been added to the game dll that gets called whenever an "sv" command is issued on the server console. This is to allow you to create commands for the server operator to type, as opposed to commands that a client would type (which are defined in g_cmds.c).

---

We did a bunch of profiling today, and finally got the information I wanted. We weren't doing anything brain-dead stupid in the server, and all of the time was pretty much where I expected it to be.

I did find two things we can pursue for optimization.

A moderately expensive categorization function is called at both the beginning and end of client movement simulation. With some care, we should be able to avoid the first one most of the time. That alone should be good for a >10% server speedup.

The other major thing is that the client movement simulation accounted for 60% of the total execution time, and because it was already compartmentalized for client side prediction, it would not be much work to make it thread-safe. Unfortunately, it would require MAJOR rework of the server code (and some of the game dll) to allow multiple client commands to run in parallel.
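
As a thought experiment, the parallel version could look something like the sketch below: buffered client commands farmed out to a small pool of threads, each taking an interleaved slice of the client list. This is purely an illustration of the idea under the assumption that the movement simulation has already been made thread-safe; none of these names are real Quake 2 functions.

[code]
/* Hypothetical sketch: simulate client movement commands in parallel.
 * Only sound if SV_ClientThink() touches no shared state without
 * synchronization, which is exactly the MAJOR rework mentioned above. */
#include <pthread.h>
#include <stdint.h>

#define MAX_CLIENTS	100
#define NUM_THREADS	2

typedef struct {
	float	origin[3];
	/* ... per-client movement state ... */
} client_t;

static client_t *svs_clients[MAX_CLIENTS];

static void SV_ClientThink(client_t *cl)
{
	(void)cl;	/* stand-in for the real, thread-safe movement sim */
}

static void *RunClientSlice(void *arg)
{
	int slice = (int)(intptr_t)arg;
	int i;

	/* each thread simulates an interleaved subset of the clients */
	for (i = slice; i < MAX_CLIENTS; i += NUM_THREADS) {
		if (svs_clients[i])
			SV_ClientThink(svs_clients[i]);
	}
	return NULL;
}

void SV_RunClientCommandsParallel(void)
{
	pthread_t threads[NUM_THREADS];
	int t;

	for (t = 0; t < NUM_THREADS; t++)
		pthread_create(&threads[t], NULL, RunClientSlice, (void *)(intptr_t)t);
	for (t = 0; t < NUM_THREADS; t++)
		pthread_join(threads[t], NULL);
}
[/code]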

The potential is there to double the peak load that a server can carry if you have multiple processors. Note that you will definitely get more players / system by just running multiple independent servers, rather than trying to get them all into a single large server.

We are not going to pursue either of these optimizations right now, but they will both be looked at again later.

All this optimizing of the single server is pushing the tail end of a paradigm. I expect trinity to be able to seamlessly hand off between clustered servers without the client even knowing it happened.

-----------------------------------------
John Carmack's .plan for Feb 04, 1998
-----------------------------------------

Ok, I'm overdue for an update.

The research getaway went well. In the space of a week, I only left my hotel to buy Diet Coke. It seems to have spoiled me a bit; the little distractions in the office grate on me a bit more since then. I will likely make week-long research excursions a fairly regular thing during non-crunch time. Once a quarter sounds about right.

I'm not ready to talk specifically about what I am working on for trinity. Quake went through many false starts (beam trees, portals, etc.) before settling down on its final architecture, so I know that the odds are good that what I am doing now won't actually be used in the final product, and I don't want to mention anything that could be taken as an implied "promise" by some people.

I'm very excited by all the prospects, though.

Many game developers are in it only for the final product, and the process is just what they have to go through to get there. I respect that, but my motivation is a bit different.

For me, while I do take a lot of pride in shipping a great product, the achievements along the way are more memorable. I don't remember any of our older product releases, but I remember the important insights all the way back to using CRTC wraparound for infinite smooth scrolling in Keen (actually, all the way back to understanding the virtues of structures over parallel arrays in Apple II assembly language..). Knowledge builds on knowledge.

I wind up categorizing periods of my life by how rich my learning experiences were at the time.

My basic skills built up during school on Apple II computers, but lack of resources limited how far and fast I could go. The situation is so much better for programmers today - a cheap used PC, a Linux CD, and an internet account, and you have all the tools and resources necessary to work your way to any level of programming skill you want to shoot for.

My first six months at Softdisk, working on the PC, were an incredible learning experience. For the first time, I was around a couple of programmers with more experience than I had (Romero and Lane Roath), there were a lot of books and materials available, and I could devote my full and undivided attention to programming. I had a great time.

The two years following, culminating in DOOM and the various video game console work I did, were a steady increase in skills and knowledge along several fronts - more graphics, networking, Unix, compiler writing, cross development, RISC architectures, etc.

The first year of Quake's development was awesome. I got to try so many new things, and I had Michael Abrash as my sounding board. It would probably surprise many classically trained graphics programmers how little I knew about conventional 3D when I wrote DOOM - hell, I had problems properly clipping wall polygons (which is where all the polar coordinate nonsense came from). Quake forced me to learn things right, as well as find some new innovations.

The last six months of Quake's development were mostly pain and suffering trying to get the damn thing finished. It was all worth it in the end, but I don't look back at it all that fondly.

The development cycle of Quake 2 had some moderate learning experiences for me (glquake, quakeworld, radiosity, OpenGL tool programming, Win32, etc.), but it also gave my mind time to sift through a lot of things before getting ready to really push ahead.

I think that the upcoming development cycle for trinity is going to be at least as rewarding as Quake's was. I am reaching deep levels of understanding on some topics, and I am branching out into several completely new (non-graphics) areas for me that should cross-pollinate well with everything else I am doing.

There should also be a killer game at the end of it. :)

-----------------------------------------
John Carmack's .plan for Feb 09, 1998
-----------------------------------------

Just got back from the Q2 wrap party in Vegas that Activision threw for us.

Having a reasonable grounding in statistics and probability and no belief in luck, fate, karma, or god(s), the only casino game that interests me is blackjack.

Playing blackjack properly is a test of personal discipline. It takes a small amount of skill to know the right plays and count the cards, but the hard part is making yourself consistently behave like a robot, rather than succumbing to your "gut instincts".

I play a basic high/low count, but I scale my bets widely - up to 20 to 1 in some cases. It's not like I'm trying to make a living at it, so the chance of getting kicked out doesn't bother me too much.

I won $20,000 at the tables, which I am donating to the Free Software Foundation. I have been meaning to do something for the FSF for a long time. Quake was deployed on a DOS port of FSF software, and both DOOM and Quake were developed on NEXTSTEP, which uses many FSF-based tools. I don't subscribe to all the FSF dogma, but I have clearly benefited from their efforts.

-----------------------------------------
John Carmack's .plan for Feb 12, 1998
-----------------------------------------

I have been getting a lot of mail with questions about the Intel i740 today, so here is a general update on the state of 3D cards as they relate to Quake engine games.

ATI Rage Pro
----
On paper, this chip looks like it should run almost decently - about the performance of a Permedia II, but with per-pixel mip mapping and colored lighting. With the currently shipping MCD GL driver on NT, it just doesn't run well at all. The performance is well below acceptable, and there are some strange mip map selection errors. We have been hearing for quite some time that ATI is working on an OpenGL ICD for both '95 and NT, but we haven't seen it yet. The Rage Pro supposedly has multitexture capability, which would help out quite a bit if they implement the multitexture extension. If they do a very good driver, the Rage Pro may get up to the performance of the Rendition cards. Supports up to 16MB, which would make it good for development work if the rest of it was up to par.

3Dlabs Permedia II
------
Good throughput, poor fillrate, fair quality, fair features.

No colored lighting blend mode, currently no mip mapping at all.

Supports up to 8MB.

The only currently shipping production full ICD for '95, but a little flaky.

If 3Dlabs implemented per-polygon mip mapping, they would get both a quality and a slight fillrate boost.

Drivers available for WinNT on the DEC Alpha (but the Alpha drivers are very flaky).

PowerVR PCX2
-----
Poor throughput, good fillrate, fair quality, poor features, low price.

No WinNT support.

Almost no blend modes at all, low alpha precision.

Even though the hardware doesn't support multitexture, they could implement the multitexture extension just to save on polygon setup costs. That might get them a 10% to 15% performance boost.

They could implement the point parameters extension for a significant boost in the speed of particle rendering. That wouldn't affect benchmark scores very much, but it would help out in hectic deathmatches.

Their OpenGL minidriver is already a fairly heroic effort - the current PVR takes a lot of beating about the head to make it act like an OpenGL accelerator.

Rendition v2100 / v2200
--------
Good throughput, good fillrate, very good quality, good features.

A good all-around chip. Not quite Voodoo 1 performance, but close.

The v2100 is simply better than everything else in the $99 price range.

Can render 24-bit color for the best possible quality, but their current drivers don't support it. Future ones probably will.

Can do 3D on the desktop.

Rendition should be shipping a full ICD OpenGL, which will make an 8MB v2200 a very good board for people doing 3D development work.

Nvidia Riva 128
-----
Very good throughput, very good fillrate, fair quality, fair features.

The fastest fill rate currently shipping, but it varies quite a bit based on texture size. On large textures it is slightly slower than Voodoo, but on smaller textures it is over twice as fast.

On paper, their triangle throughput rate should be three times what Voodoo gives, but in practice we are only seeing a slight advantage on very fast machines, and worse performance on Pentium-class machines. They probably have a lot of room to improve that in their drivers.

In general, it is fair to say that the Riva is somewhat faster than Voodoo 1, but it has a few strikes against it.

The feature implementation is not complete. They have the blend mode for colored lighting, but they still don't have them all. That may hurt them in future games. Textures can only be 1 to 1 aspect ratio. In practice, that just means that non-square textures waste memory.

The rendering quality isn't quite as high as Voodoo or Rendition. It looks like some of their iterators don't have enough precision.

Nvidia is serious and committed to OpenGL. I am confident that their driver will continue to improve in both performance and robustness.

While they can do good 3D in a window, they are limited to a max of 4MB of framebuffer, which means that they can't run at a high enough resolution to do serious work.

3DFX Voodoo 1
-----
The benchmark against which everything else is measured.

Good throughput, good fillrate, good quality, good features.

It has a couple faults, but damn few: max texture size limited to 256*256 and 8 to 1 aspect ratio. Slow texture swapping. No 24-bit rendering.

Because of the slow texture swapping, anyone buying a Voodoo should get a 6MB board (e.g. Canopus Pure3D). The extra RAM prevents some sizable jerks when textures need to be swapped.

Highly tuned minidriver. They have a full ICD in alpha, but they are being slow about moving it into production. Because of the add-in board nature of the 3DFX, the ICD won't be useful for things like running level editors, but it would at least guarantee that any new features added to Quake engine games won't require revving the minidriver to add new functionality.

3DFX Voodoo 2
-----
Not shipping yet, but we were given permission to talk about the benchmarks on their preproduction boards.

Excellent throughput, excellent fillrate, good quality, excellent features.

The numbers were far and away the best ever recorded, and they are going to get significantly better. On Quake 2, Voodoo 2 is setup limited, not fill rate limited. Voodoo 2 can do triangle strip and fan setup in hardware, but their OpenGL can't take advantage of it until the next revision of Glide. When that happens, the number of vertexes being sent to the card will drop by HALF. At 640*480, they will probably become fill rate bound again (unless you interleave two boards), but at 512*384, they will probably exceed 100 fps on a timedemo. In practice, that means that you will play the game at 60 fps with hardly ever a dropped frame.

The texture swapping rate is greatly improved, addressing the only significant problem with Voodoo.

I expect that for games that heavily use multitexture (all Quake engine games), Voodoo 2 will remain the highest performer for all of '98. All you other chip companies, feel free to prove me wrong. :)

Lack of 24-bit rendering is the only visual negative.

As with any Voodoo solution, you also give up the ability to run 3D applications on your desktop. For pure gamers, that isn't an issue, but for hobbyists that may be interested in using 3D tools it may have some weight.

Intel i740
----
Good throughput, good fillrate, good quality, good features.

A very competent chip. I wish Intel great success with the 740. I think that it firmly establishes the baseline that other companies (especially the ones that didn't even make this list) will be forced to come up to.

Voodoo rendering quality, better than Voodoo 1 performance, good 3D-on-the-desktop integration, and all textures come from AGP memory so there is no texture swapping at all.

Lack of 24-bit rendering is the only negative of any kind I can think of.

Their current MCD OpenGL on NT runs Quake 2 pretty well. I have seen their ICD driver on '95 running Quake 2, and it seems to be progressing well. The chip has the potential to outperform Voodoo 1 across the board, but 3DFX has more highly tuned drivers right now, giving it a performance edge. I expect Intel will get the performance up before releasing the ICD.

It is worth mentioning that of all the drivers we have tested, Intel's MCD was the only driver that did absolutely everything flawlessly. I hope that their ICD has a similar level of quality (it's a MUCH bigger job).

An 8MB i740 will be a very good setup for 3D development work.

-----------------------------------------
John Carmack's .plan for Feb 16, 1998
-----------------------------------------

8MB or 12MB Voodoo 2?

An 8MB Voodoo 2 has 2MB of texture memory on each TMU. That is not as general as the current 6MB Voodoo 1 cards that have 4MB of texture memory on a single TMU. To use the multitexture capability, textures are restricted to being on one or the other TMU (simplifying a bit here). There is some benefit over only having 2MB of memory, but it isn't double. You will see more texture swapping in Quake on an 8MB Voodoo 2 than you would on a 6MB Voodoo 1. However, the texture swapping is several times faster, so it isn't necessarily all that bad.

If you use the 8-bit palettized textures, there will probably not be any noticeable speed improvement with a 12MB Voodoo 2 vs. an 8MB one. The situation that would most stress it would be an active deathmatch that had players using every skin. You might see a difference there.

A game that uses multitexture and 16-bit textures for everything will stress a 4/2/2 Voodoo layout. Several of the Quake engine licensees are using full 16-bit textures, and should perform better on a 4/4/4 card.

The differences probably won't show up as significant on timedemo numbers, but they will be felt as little one-frame hitches here and there.

-----------------------------------------
John Carmack's .plan for Feb 17, 1998
-----------------------------------------

I just read the Wired article about all the Doom spawn.

I was quoted as saying "like I'm supposed to be scared of Monolith", which is much more derogatory sounding than I would like.

I haven't followed Monolith's development, and I don't know any of their technical credentials, so I am not in any position to evaluate them.

The topic of "is Microsoft going to crush you now that they are in the game biz" made me a bit sarcastic.

I honestly wish the best to everyone pursuing new engine development.

-----------------------------------------
John Carmack's .plan for Feb 22, 1998
-----------------------------------------

Don't send any bug reports on the 3.12 release to me; I just forward them over to jcash. He is going to be managing all future work on the Quake 2 codebase through the mission packs. I'm working on trinity.

3.12 answered the release question pretty decisively for me. We were in code freeze for over two weeks while the release was being professionally beta tested, and all it seemed to get us was a release that was two weeks later.

Future releases are going to be of the fast/multiple release type, but clearly labeled as a "beta" release until it stabilizes. A dozen professional testers or fifty amateur testers just can't compare to the thousands of players who will download a beta on the first day.

I have spent a while thinking about the causes of the patches for Q2. Our original plan was to just have the contents of 3.12 as the first patch, but have it out a month earlier than we did.

The first several patches were forced due to security weaknesses. Lesson learned - we need to design more security-consciously to try to protect against the assholes out there.

The cause for the upcoming 3.13 patch is the same thing that has caused us a fair amount of trouble through Q2's development - instability in the gamex86 code due to its descending from QC code in Q1. It turns out that there were lots of bugs in the original QC code, but because of its safe interpreted nature (specifically having a null entity reference the world) they never really bothered anyone. We basically just ported the QC code to regular C for Q2 (it shows in the code) and fixed crash bugs as they popped up. We should have taken the time to redesign more for C's strengths and weaknesses.

-----------------------------------------
John Carmack's .plan for Mar 12, 1998
-----------------------------------------

American McGee has been let go from Id.

His past contributions include work in three of the all-time great games (DOOM 2, Quake, Quake 2), but we were not seeing what we wanted.

-----------------------------------------
John Carmack's .plan for Mar 13, 1998
-----------------------------------------

The Old Plan:

The rest of the team works on an aggressive Quake 2 expansion pack while Brian and I develop tools and code for the entirely new Trinity generation project to begin after the mission pack ships.

The New Plan:

Expand the mission pack into a complete game, and merge together a completely new graphics engine with the Quake 2 game / client / server framework, giving us Quake 3.

"Trinity" is basically being broken up into two phases: graphics and everything else. Towards the end of Quake 1's development I was thinking that we might have been better off splitting Quake along those categories, but in reverse order. Doing client/server, the better modification framework, and QC, coupled with a spiced-up DOOM engine (Duke style) for one game, then doing the full 3D renderer for the following game.

We have no reason to believe that the next generation of development would somehow go faster than the previous one, so there is a real chance that doing all of the Trinity technology at once might push game development time to a full two years for us, which might be a bit more than the pressure-cooker work atmosphere here could handle.

So, we are going to try an experiment.

The non-graphics things that I was planning for Trinity will be held off until the following project - much Java integration with client-downloadable code being one of the more significant aspects. I hope to get to some next-generation sound work, but the graphics engine is the only thing I am committing to.

The graphics engine is going to be hardware accelerated ONLY. NO SOFTWARE RENDERER, and it won't work very well on a lot of current hardware. We understand fully that this is going to significantly cut into our potential customer base, but everyone was tired of working under the constraints of the software renderer. There are still going to be plenty of good Quake-derived games to play from other developers for people without appropriate hardware.

There are some specific things that the graphics technology is leveraging that may influence your choice of a 3D accelerator.

All source artwork is being created and delivered in 24-bit color. An accelerator that can perform all 3D work in 24-bit color will look substantially better than a 16-bit card. You will pay a speed cost for it, though.

Most of the textures are going to be higher resolution. Larger amounts of texture memory will make a bigger difference than they do on Quake 2.

Some key rendering effects require blending modes that some cards don't support.

The fill rate requirements will be about 50% more than Quake 2, on average. Cards that are fill rate limited will slow down unless you go to a lower resolution.

The triangle rate requirements will be at least double Quake 2's, and scalable to much higher levels of detail on appropriate hardware.

Here are my current GUESSES about how existing cards will perform.

Voodoo 1
Performance will be a little slow, but it should look good and run acceptably. You will have to use somewhat condensed textures to avoid texture thrashing.

Voodoo 2
Should run great. Getting the 12MB board is probably a good idea if you want to use the high resolution textures. The main rendering mode won't be able to take advantage of the dual TMU the same way Quake 2 does, so the extra TMU will be used for slightly higher quality rendering modes instead of greater speed: trilinear / detail texturing, or some full color effects where others get a mono channel.

Permedia 2
Will be completely fill rate bound, so it will basically run at 2/3 the speed that Quake 2 does. Not very fast. It also doesn't have one of the needed blending modes, so it won't look very good, either. The P2 does support 24-bit rendering, but it won't be fast enough to use it.

ATI Rage Pro
It looks like the Rage Pro has all the required blending modes, but the jury is still out on the performance.

Intel i740
Should run well with all features, and because all of the textures come out of AGP memory, there will be no texture thrashing at all, even with the full resolution textures.

Rendition 2100/2200
The 2100 should run at about the speed of a Voodoo 1, and the 2200 should be faster. They support all the necessary features, and an 8MB 2200 should be able to use the high res textures without a problem. The Renditions are the only current boards that can do 24-bit rendering with all the features. It will be a bit slow in 24-bit mode, but it will look the best.

PVR PCX2
Probably won't run Quake 3. They don't have ANY of the necessary blending modes, so it can't look correct. VideoLogic might decide to rev their minidriver to try to support it, but it is probably futile.

Riva 128
The Riva puts us in a bad position. They are very fast, but they don't support an important feature. We can crutch it up by performing some extra drawing passes, but there is a bit of a quality loss, and it will impact their speed. They will probably be a bit faster than Voodoo 1, but not to the degree that they are in Quake 2.

Naturally, the best cards are yet to come (I won't comment on unreleased cards). The graphics engine is being designed to be scalable over the next few YEARS, so it might look like we are shooting a bit high for the first release, but by the time it actually ships, there will be a lot of people with brand new accelerators that won't be properly exploited by any other game.

-----------------------------------------
John Carmack's .plan for Mar 20, 1998
-----------------------------------------

Robert Duffy, the maintainer of Radiant QE4, is now "officially" in charge of further development of the editor codebase. He joins Zoid as a (part-time) contractor for us.

A modified version of Radiant will be the level editor for Quake 3. The primary changes will be support for curved surfaces and more general surface shaders. All changes will be publicly released, either after Q3 ships or possibly at the release of Q3Test, depending on how things are going.

The other major effort is to get Radiant working properly on all of the 3D cards that are fielding full OpenGL ICDs. If you want to do level development, you should probably get an 8MB video card. Permedia II cards have been the mainstay for developers that can't afford Intergraph systems, but 8MB Rendition v2200 (Thriller 3D) cards are probably a better bet as soon as their ICD gets all the bugs worked out.

-----------------------------------------
John Carmack's .plan for Mar 21, 1998
-----------------------------------------

I just shut down the last of the NEXTSTEP systems running at id.

We hadn't really used them for much of anything in the past year, so it was just easier to turn them off than to continue to administer them.

Most of the Intel systems had already been converted to NT or 95, and Onethumb gets all of our old black NeXT hardware, but we have four nice HP 712/80 workstations that can't be used for much of anything.

If someone can put these systems to good use (a Dallas-area Unix hacker), you can have them for free. As soon as they are spoken for, I will update my .plan, so check immediately before sending me email.

You have to come by our office (in Mesquite) and do a fresh OS install here before you can take one. There may still be junk on the HD, and I can't spend the time to clean them myself. You can run either NEXTSTEP 3.3 or HP/UX. These are NOT Intel machines, so you can't run DOS or Windows. I have NS CDs here, but I can't find the original HP/UX CDs. Bring your own if that's what you want.

I'm a bit nostalgic about the NeXT systems -- the story in the Id Anthology is absolutely true: I walked through a mile of snow to the bank to pay for our first workstation. For several years, I considered it the best development environment around. It still has advantages today, but you can't do any accelerated 3D work on it.

I had high hopes for Rhapsody, but even on a top-of-the-line PPC, it felt painfully sluggish compared to the NT workstations I use normally, and Apple doesn't have their 3D act together at all.

It's kind of funny, but even through all the D3D/OpenGL animosity, I think Windows NT is the best place to do 3D graphics development.

All gone!
--------------

Paul Magyar gets the last (slightly broken) one.

Bob Farmer gets the third.

Philip Kizer gets the second one.

Kyle Bousquet gets the first one.

3/21 pt 2
---------
I haven't given up on Rhapsody yet. I will certainly be experimenting with the release version when it ships, but I have had a number of discouraging things happen. Twice I was going to go do meetings at Apple with all the relevant people, but the people setting it up would get laid off before the meetings happened. Several times I would hear encouraging rumors about various things, but they never panned out. We had some biz discussions with Apple about Rhapsody, but they were so incredibly cautious about targeting Rhapsody for consumer apps at the expense of MacOS that I doubted their resolve.

I WANT to help. Maybe post-E3 we can put something together.

The SGI/Microsoft deal fucked up a lot of the 3D options. The codebase that everyone was using to develop OpenGL ICDs is now owned by Microsoft, so it is unlikely any of them will ever be allowed to port to Rhapsody (or Linux, or BeOS).

That is one of the things I stress over -- The Right Thing is clear, but it's not going to happen because of biz moves. It would be great if ATI, which has video drivers for Windows, Rhapsody, Linux, and BeOS, could run the same ICD on all those platforms.

-----------------------------------------
John Carmack's .plan for Mar 26, 1998
-----------------------------------------

I haven't even seen the "BeOS port of Quake". Stop emailing me about approving it. I told one of the Lion developers he could port it to BeOS in his spare time, but I haven't seen any results from it.

-

There is a public discussion / compilation going on at OpenQuake for suggestions to improve technical aspects of Quake 3:

http://www.openquake.org/q3suggest/

This is sooo much better than just dropping me an email when a thought hits you. There are many, many thousands of you out there, and there needs to be some filtering process so we can get the information efficiently.

We will read and evaluate everything that makes it through the discussion process. There are two possible reasons why features don't make it into our games - either we decide that the effort is better spent elsewhere, or we just don't think about it. Sometimes the great ideas are completely obvious when suggested, but were just missed. That is what I most hope to see.

When the suggestions involve engineering tradeoffs and we have to consider the implementation effort of a feature vs. its benefits, the best way to convince us to pursue it is to specify EXACTLY what benefits would be gained by undertaking the work, and to specify a clean interface to the feature from the file system data and the gamex86 code.

We hack where necessary, but I am much more willing to spend my time on an elegant extension that has multiple uses, rather than adding API bulk for specific features. Point out things that are clunky and inelegant in the current implementation. Even if it doesn't make any user-visible difference, restructuring an API for cleanliness is still a worthwhile goal.

We have our own ideas about gameplay features, so we may just disagree with you. Even if you-and-all-your-friends are SURE that your suggestions will make the game a ton better, we may not think it fits with our overall direction. We aren't going to be all things to all people, and we don't design by committee.

-----------------------------------------
John Carmack's .plan for Apr 02, 1998
-----------------------------------------

Drag strip day!

Most of the id guys, John Romero from ION, and George and Alan from 3D Realms headed to the Ennis dragstrip today.

Nobody broke down, and some good times were posted.

11.9 @ 122  John Carmack  F40
12.2 @ 122  George Broussard  custom turbo 911
12.4 @ 116  Brian Hook  Viper GTS
13.4 @ 106  John Romero  custom turbo Testarossa
13.6 @ 106  Todd Hollenshead  'vette
13.9 @ 100  Paul Steed  911
14.0 @ 99   Tim Willits  911
14.3 @ 101  Bear  Turbo Supra
14.4 @ 98   Alan Blum  turbo RX-7
14.7 @ 92   Brandon James  M3
15.3 @ 92   Christian  Boxster
15.5 @ 93   Jen (Hook's Chick)  Turbo Volvo
16.1 @ 89   Ms. Donna  Mustang GT
17.4 @ 82   Anna (Carmack's Chick)  Honda Accord
18.1 @ 75   Jennifer (Jim Molinets' Chick)  Saturn

We had three significant no-shows for various reasons: my TR, Adrian's Viper, and Cash's supercharged M3 were all in the shop.

-----------------------------------------
John Carmack's .plan for Apr 08, 1998
-----------------------------------------

Things are progressing reasonably well on the Quake 3 engine.

Not being limited to supporting a 320*240 256-color screen is very, very nice, and will make everyone's lives a lot easier.

All of our new source artwork is being done in 24-bit TGA files, but the engine will continue to load .wal files and .pcx files for developers' convenience. Each .pcx can have its own palette now though, because it is just converted to 24-bit at load time.

Q3 is going to have a fixed virtual screen coordinate system, independent of resolution. I tried that back in the original glquake, but the fixed coordinate system was only 320*200, which was excessively low. Q2 went with a dynamic layout at different resolutions, which was a pain, and won't scale to the high resolutions that very fast cards will be capable of running at next year.

All screen drawing is now done assuming the screen is 640*480, and everything is just scaled as you go higher or lower. This makes laying out status bars and HUDs a ton easier, and will let us do a lot cooler-looking screens.
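
The scaling itself is only a few lines. Here is a sketch of what mapping virtual 640*480 coordinates to the real framebuffer can look like; the function and variable names are illustrative assumptions, not necessarily the shipping code.

[code]
/* Hypothetical sketch of a fixed virtual coordinate system: all 2D
 * drawing code thinks the screen is 640*480, and coordinates are
 * stretched to the real resolution at draw time. */
#define VIRTUAL_WIDTH	640
#define VIRTUAL_HEIGHT	480

static int realWidth, realHeight;	/* actual framebuffer size, set at vid init */

void SCR_AdjustFrom640(float *x, float *y, float *w, float *h)
{
	float xscale = realWidth / (float)VIRTUAL_WIDTH;
	float yscale = realHeight / (float)VIRTUAL_HEIGHT;

	*x *= xscale;
	*w *= xscale;
	*y *= yscale;
	*h *= yscale;
}
[/code]

Status bar code can then say "draw at 0,440, 640 wide, 40 tall" and have it land in the same place at 320*240 or 1600*1200.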

There will be an interface to let game dlls draw whatever they want on the screen, precisely where they want it. You can suck up a lot of network bandwidth doing that though, so some care will be needed.

-

Going to the completely opposite end of the hardware spectrum from Quake 3..

I have been very pleased with the fallout from the release of the DOOM source code.

At any given spot in design space, there are different paths you can take to move forward. I have usually chosen to try to make a large step to a completely new area, but the temptation is there to just clean up and improve in the same area, continuously polishing the same program.

I am enjoying seeing several groups poring over DOOM, fixing it up and enhancing it. Cleaning up long-standing bugs. Removing internal limitations. Orthogonalizing feature sets. Etc.

The two that I have been following closest are Team TNT's BOOM engine project, which is a clear-headed, well-engineered improvement on the basic DOOM technical decisions, and Bruce Lewis' glDoom project.

Any quakers feeling nostalgic should browse around:

http://www.doomworld.com/

-----------------------------------------
John Carmack's .plan for Apr 16, 1998
-----------------------------------------

F40 + $465,000 = F50

-----------------------------------------
John Carmack's .plan for Apr 17, 1998
-----------------------------------------

Yes, I bought an F50. No, I don't want a McLaren.

We will be going back to the dragstrip in a couple weeks, and I will be exercising both the F50 and the TR there. Cash's supercharged M3 will probably show some of the Porsches a thing or two, as well.

I'll probably rent a road course sometime soon, but I'm not in too much of a hurry to run the F50 into the weeds.

My TR finally got put back together after a terrific nitrous explosion just before the last dragstrip. It now makes 1000.0 hp at the rear wheels. Contrast that with the 415 rear wheel hp that the F40 made. Of course, a loaded Testarossa does weigh about 4000 lbs..

My project car is somewhat nearing completion. My mechanic says it will be running in six weeks, but mechanics can be even more optimistic than software developers. :) I'm betting on fall. It should really be something when completed: a carbon fiber bodied Ferrari GTO with a custom, one-of-a-kind billet aluminum 4-valve DOHC 5.2L V12 with twin turbos running around 30 lbs of boost. It should be good for quite a bit more hp than my TR, and the entire car will only weigh 2400 lbs.

---

The distance between a cool demo and production code is vast. Two months ago, I had some functional demos of several pieces of the Quake 3 rendering tech, but today it still isn't usable as a full replacement for ref_gl yet.

Writing a modern game engine is a lot of work.

The new architecture is turning out very elegant. Not having to support software rendering or color index images is helping a lot, but it is also nice to reflect on just how much I have learned in the couple of years since the original Quake renderer was written.

My C coding style has changed for Quake 3, which is going to give me a nice way of telling at a glance which code I have or haven't touched since Quake 2. In fact, there have been enough evolutions in my style that you can usually tell what year I wrote a piece of code by just looking at a single function:

[code]
/*
=============
=
= Function headers like this are DOOM or earlier
=
=============
*/

/*
=============
Function Headers like this are Quake or later
=============
*/

{
// comments not indented were written on NEXTSTEP
// (quake 1)

	// indented comments were written on
	// Visual C++ (glquake / quakeworld, quake2)
}

for (testnum=0 ; testnum<4 ; testnum++)
{	// older coding style
}

for (testNumber = 0 ; testNumber < 4 ; testNumber++) {
	// quake 3 coding style
}
[/code]

-----------------------------------------
John Carmack's .plan for Apr 22, 1998
-----------------------------------------

F50 pros and cons vs. the F40:

The front and rear views are definitely cooler on the F50, but I think I like the F40 side view better. I haven't taken the top off the F50 yet, though (it's supposed to be a 40-minute job..).

Adjustable front suspension. Press a button and it raises two inches, which means you can actually drive it up into strip malls. The F40 had to be driven into my garage at an angle to keep the front from rubbing. This makes the car actually fairly practical for daily driving.

Drastically better off-idle torque. You have to rev the F40 a fair amount to even get it moving, and if you are moving at 2000 rpm in first gear, a Honda can pull away from you until it starts making boost at 3500 rpm. The F50 has enough torque that you don't even need to rev to get moving, and it goes quite well by just flooring it after you are moving. No need to wreck a clutch by slipping it out from 4000 rpm.

Much nicer clutch. The F40 clutch was a very low-tech single-disk clutch that required more effort than on my crazy TR with over twice the torque.

Better rearward visibility. The F40's Lexan fastback made everything to your rear a blur.

Better shifting. A much smoother six-speed than the F40's five-speed.

Better suspension. Some bumps that would upset the F40 badly are handled without any problems.

Better aerodynamics. A flat underbody with tunnels is a good thing if you are going to be moving at very high speeds.

I believe the F50 could probably lap a road course faster than the F40, but in a straight line, the F40 is faster. The F50 felt a fair amount slower, but I was chalking that up to the lack of non-linear turbo rush. Today I drove it down to the dyno and we got real numbers.

It only made 385 hp at the rear wheels, which is maybe 450 at the crank if you are being generous. The F40 made 415, but that was with the boost cranked up a bit over stock.

We're going to have to do something about that.

I'm thinking that a mild twin-turbo job will do the trick. Six pounds of boost should get it up to a healthy 500 hp at the rear wheels, which will keep me happy. I don't want to turn it into a science project like my TR; I just want to make sure it is well out of the range of any normal cars.

I may put that in line after my GTO gets finished.
703 |
+
-----------------------------------------
John Carmack's .plan for May 02, 1998
-----------------------------------------

The rcon backdoor was added to help the development of QuakeWorld (it is not present in Quake 1). At the time, attacking Quake servers with spoofed packets was not the popular sport it seems to have become with Quake 2, so I didn't think much about the potential for exploitation.

The many forced releases of Quake 2 due to hacker attacks have certainly taught me to be a lot more wary.

It was a convenient feature for us, but it turned out to be irresponsible. Sorry.

There will be new releases of QuakeWorld and Quake 2 soon.

-----------------------------------------
John Carmack's .plan for May 04, 1998
-----------------------------------------

Here are some notes on a few of the technologies that I researched in preparing for the Quake3/trinity engine. I got a couple months of pretty much wide open research done at the start, but it turned out that none of the early research actually had any bearing on the directions I finally decided on. Ah well, I learned a lot, and it will probably pay off at some later time.

I spent a little while doing some basic research with lumigraphs, which are sort of a digital hologram. The space requirements are IMMENSE, on the order of several gigs uncompressed for even a single full sized room. I was considering the possibility of using very small lumigraph fragments (I called them "lumigraphlets") as impostors for large clusters of areas, similar to approximating an area with a texture map, but it would effectively be a view dependent texture.

The results were interesting, but transitioning seamlessly would be difficult, the memory requirements were still large, and it has all the same caching issues that any impostor scheme has.

Another approach I worked on was basically extending the sky box style of rendering from Quake 2 into a complete rendering system. Take a large number of environment map snapshots, and render a view by interpolating between up to four maps (if in a tetrahedral arrangement) based on the view position.

A simple image-based interpolation doesn't convey a sense of motion, because it basically just ghosts between separate points unless the maps are VERY close together relative to the nearest point visible in the images.

If the images that make up the environment map cube also contain depth values at some (generally lower) resolution, instead of rendering the environment map as six big flat squares at infinity, you can render it as a lot of little triangles at the proper world coordinates for the individual texture points. A single environment map like this can be walked around in and gives a sense of motion. If you have multiple maps from nearby locations, they can be easily blended together. Some effort should be made to nudge the mesh samples so that as many points are common between the maps as possible, but even a regular grid works ok.

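To make that concrete, here is a minimal sketch of the reprojection step (my illustration, with invented names - FaceDir() is an assumed helper that returns the unit view ray through a texel of a given cube face):

[code]
// Hypothetical sketch: reproject one face of a depth-augmented
// environment map into world space vertices.
typedef struct { float x, y, z; } vec3_t;

extern vec3_t FaceDir( int face, float s, float t );	// assumed helper

void ReprojectFace( int face, const float *depth, int w, int h,
                    vec3_t camPos, vec3_t *verts ) {
	int	s, t;
	vec3_t	dir;
	float	d;

	for ( t = 0 ; t < h ; t++ ) {
		for ( s = 0 ; s < w ; s++ ) {
			// view ray through the center of this texel
			dir = FaceDir( face, ( s + 0.5f ) / w, ( t + 0.5f ) / h );
			d = depth[ t * w + s ];		// stored distance along the ray

			// place the vertex at its true world position instead
			// of leaving it on a box at infinity
			verts[ t * w + s ].x = camPos.x + dir.x * d;
			verts[ t * w + s ].y = camPos.y + dir.y * d;
			verts[ t * w + s ].z = camPos.z + dir.z * d;
		}
	}
	// the grid would then be triangulated and textured with the original
	// face image; meshes from nearby snapshots can be blended together
}
[/code]
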
You get texture smearing when occluded detail should be revealed, and if you move too far from the original camera point the textures blur out a lot, but it is still a very good effect, is completely complexity insensitive, and is aliasing free except when the view position causes a silhouette crease in the depth data.

Even with low res environment maps like in Quake 2, each snapshot would consume 700k, so taking several hundred environment images throughout a level would generate too much data. Obviously there is a great deal of redundancy - you will have several environment maps that contain the same wall image, for instance. I had an interesting idea for compressing it all. If you ignore specular lighting and atmospheric effects, any surface that is visible in multiple environment maps can be represented by a single copy of it and a perspective transformation of that image. Single image, transformations, sounds like.. fractal compression. Normal fractal compression only deals with affine maps, but the extension to projective maps seems logical.

I think that a certain type of game could be done with a technology like that, but in the end, I didn't think it was the right direction for a first person shooter.

There is a tie-in between lumigraphs, multiple environment maps, specularity, convolution, and dynamic indirect lighting. It's nagging at me, but it hasn't come completely clear.

Other topics for when I get the time to write more:

Micro environment map based model lighting. Convolutions of environment maps by Phong exponent; an exponent of one with the normal vector is diffuse lighting.

Full surface texture representation. Interior antialiasing with edge matched texels.

Octree represented surface voxels. Drawing and tracing.

Bump mapping, and why most of the approaches being suggested for hardware are bogus.

Parametric patches vs implicit functions vs subdivision surfaces.

Why all analytical boundary representations basically suck.

Finite element radiosity vs photon tracing.

etc.

-----------------------------------------
John Carmack's .plan for May 17, 1998
-----------------------------------------

Here is an example of some bad programming in Quake:

There are three places where text input is handled in the game: the console, the chat line, and the menu fields. They all used completely different code to manage the input line and display the output. Some allowed pasting from the system clipboard, some allowed scrolling, some accepted unix control character commands, etc. A big mess.

Quake 3 will finally have full support for international keyboards and character sets. This turned out to be a bit more trouble than expected because of the way Quake treated keys and characters, and it led to a rewrite of a lot of the keyboard handling, including a full cleanup and improvement of text fields.

A similar cleanup of the text printing happened when Cash implemented general colored text: we had at least a half dozen different little loops to print strings with slightly different attributes, but now we have a generalized one that handles embedded color commands or force-to-color printing.

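The general shape of such a loop is simple; this is a minimal sketch of the idea, with an illustrative '^' escape convention and invented helper names, not the actual id interfaces:

[code]
// A generalized colored-string drawer: embedded color commands switch
// the draw color unless the caller forces a single color for the
// whole string. Helpers and the escape convention are assumed.
#define COLOR_ESCAPE		'^'
#define SMALLCHAR_WIDTH		8

extern void SetDrawColor( int colorIndex );		// assumed helper
extern void DrawSmallChar( int x, int y, int ch );	// assumed helper

void DrawColoredString( int x, int y, const char *str, int forceColor ) {
	const char *s = str;

	while ( *s ) {
		// embedded color command: escape character plus a digit
		if ( s[0] == COLOR_ESCAPE && s[1] >= '0' && s[1] <= '9' ) {
			if ( !forceColor ) {	// force-to-color ignores these
				SetDrawColor( s[1] - '0' );
			}
			s += 2;		// color commands take no screen space
			continue;
		}
		DrawSmallChar( x, y, *s );
		x += SMALLCHAR_WIDTH;
		s++;
	}
}
[/code]
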
Amidst all the high end graphics work, sometimes it is nice to just fix up something elementary.

-----------------------------------------
John Carmack's .plan for May 19, 1998
-----------------------------------------

A 94 degree day at the dragstrip today. Several 3D Realms and Norwood Autocraft folk also showed up to run. We got to weigh most of the cars on the track scales, which gives us a few more data points.

11.6 @ 125  Bob Norwood's Ferrari P4 race car (2200 lbs)
11.9 @ 139  John Carmack's twin turbo Testarossa (3815 lbs)
11.9 @ 117  Paul Steed's YZF600R bike
12.1 @ 122  John Carmack's F50 (3205 lbs)
12.3 @ 117  Brian's Viper GTS (3560 lbs)
13.7 @ 103  John Cash's supercharged M3
14.0 @ 96   Scott Miller's Lexus GS400
15.0 @ ???  Someone's Volkswagen GTI
15.1 @ ???  Christian's Boxster (with Tim driving)

Weight is the key for good ETs. The TR has a considerably better power to weight ratio than the P4, but it can't effectively use most of the power until it gets into third gear. The Viper is actually making more power than the F50 (Brian got a big kick out of that after his dyno run), but 350 extra lbs more than compensated for it.

I wanted to hit 140 in the TR, but the clutch started slipping on the last run and I called it a day.

I was actually surprised the F50 ran 122 mph, which is the same as the F40 did on a 25 degree cooler day. I was running with the top off, so it might even be capable of going a bit faster with it on.

The F50 and the Viper were both very consistent performers, but the TR and the supercharged M3 were all over the place with their runs.

Brian knocked over a tenth off of his times even in spite of the heat, due to launch practice and some inlet modifications. He also power shifted on his best run.

It was pretty funny watching the little Volkswagen consistently beat up on a tire shredding Trans-Am.

George Broussard had his newly hopped up 911 turbo, but it broke the trans on its very first run. We were expecting him to be in the 11's.

We probably won't run again until either I get the F50 souped up, or my GTO gets finished.

-----------------------------------------
John Carmack's .plan for May 22, 1998
-----------------------------------------

Congratulations to Epic, Unreal looks very good.

-----------------------------------------
John Carmack's .plan for Jun 08, 1998
-----------------------------------------

I spent quite a while investigating the limits of input under Windows recently. I found out a few interesting things:

Mouse sampling on Win95 only happens every 25ms. It doesn't matter if you check the cursor or use DirectInput, the values will only change 40 times a second.

This means that with normal checking, the mouse control will feel slightly stuttery whenever the framerate is over 20 fps, because on some frames you will be getting one input sample, and on other frames you will be getting two. The difference between two samples and three isn't very noticeable, so it isn't much of an issue below 20 fps. Above 40 fps it is a HUGE issue, because the frames will be bobbing between one sample and zero samples.

I knew there were some sampling quantization issues early on, so I added the "m_filter 1" variable, but it really wasn't an optimal solution. It averaged together the samples collected over the last two frames, which worked out ok if the framerate stayed consistently high and you were only averaging together one to three samples, but when the framerate dropped to 10 fps or so, you wound up averaging together a dozen more samples than were really needed, giving the "rubber stick" feel to the mouse control.

I now have three modes of mouse control:

in_mouse 1: Mouse control with standard Win32 cursor calls, just like Quake 2.

in_mouse 2: Mouse control using DirectInput to sample the mouse relative counters each frame. This behaves like winquake with -dinput. There isn't a lot of difference between this and 1, but you get a little more precision, and you never run into window clamping issues. If at some point in the future Microsoft changes the implementation of DirectInput so that it processes all pending mouse events exactly when the getState call happens, this will be the ideal input mode.

in_mouse 3: Processes DirectInput mouse movement events, and filters the amount of movement over the next 25 milliseconds. This effectively adds about 12 ms of latency to the mouse control, but the movement is smooth and consistent at any variable frame rate. This will be the default for Quake 3, but some people may want the 12ms faster (but rougher) response time of mode 2.

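One way to picture the mode 3 filtering: treat each DirectInput event's movement as if it were spread uniformly over the 25 ms following its timestamp, and hand each frame only the portion that has "elapsed" so far. This sketch is my illustration of that idea, not the shipping code; the structure and names are invented:

[code]
#define FILTER_MS	25
#define MAX_EVENTS	64

typedef struct {
	int	time;		// ms timestamp of the DirectInput event
	float	dx, dy;		// movement reported by the event
	float	used;		// fraction [0..1] already handed to earlier frames
} mouseEvent_t;

static mouseEvent_t	events[MAX_EVENTS];
static int		numEvents;

void GetFilteredMouseMove( int now, float *dx, float *dy ) {
	int	i;
	float	frac;

	*dx = *dy = 0;
	for ( i = 0 ; i < numEvents ; i++ ) {
		// fraction of the event's 25 ms window elapsed by now
		frac = (float)( now - events[i].time ) / FILTER_MS;
		if ( frac > 1 ) {
			frac = 1;
		}
		if ( frac > events[i].used ) {
			*dx += events[i].dx * ( frac - events[i].used );
			*dy += events[i].dy * ( frac - events[i].used );
			events[i].used = frac;
		}
	}
	// fully consumed events would be compacted out of the array here
}
[/code]
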
It takes a pretty intense player to even notice the difference in most cases, but if you have a setup that can run a very consistent 30 fps you will probably appreciate the smoothness. At 60 fps, anyone can tell the difference, but rendering speeds will tend to cause a fair amount of jitter at those rates no matter what the mouse is doing.

DirectInput on WindowsNT does not log mouse events as they happen, but seems to just do a poll when called, so they can't be filtered properly.

Keyboard sampling appears to be millisecond precise on both OSes, though.

In doing this testing, it has become a little bit more tempting to try to put in more leveling optimizations to allow 60 hz framerates on the highest end hardware, but I have always shied away from targeting very high framerates as a goal, because when you miss by a tiny little bit, the drop from 60 to 30 fps (1 to 2 vertical retraces) is extremely noticeable.

-

I have also concluded that the networking architecture for Quake 2 was just not the right thing. The interpolating 10 hz server made a lot of animation easier, which fit with the single player focus, but it just wasn't a good thing for internet play.

Quake 3 will have an all new entity communication mechanism that should be solidly better than any previous system. I have some new ideas that go well beyond the previous work that I did on QuakeWorld.

It's tempting to try to roll the new changes back into Quake 2, but a lot of them are pretty fundamental, and I'm sure we would bust a lot of important single player stuff while gutting the network code.

(Yes, we made some direction changes in Quake 3 since the original announcement, when it was to be based on the Quake 2 game and networking with just a new graphics engine.)

-----------------------------------------
John Carmack's .plan for Jun 16, 1998
-----------------------------------------

My last two .plan updates have described efforts that were not in our original plan for Quake 3, which was "Quake 2 game and network technology with a new graphics engine".

We changed our minds.

The new product is going to be called "Quake Arena", and will consist exclusively of deathmatch style gaming (including CTF and other derivatives). The single player game will just be a progression through a ranking ladder against bot AIs. We think that can still be made an enjoyable game, but it is definitely a gamble.

In the past, we have always been designing two games at once, the single player game and the multiplayer game, and they often had conflicting goals. For instance, the client-server communications channel discouraged massive quantities of moving entities that would have been interesting in single player, while the maps and weapons designed for single player were not ideal for multiplayer. The largest conflict was just raw development time. Time spent on monsters is time not spent on player movement. Time spent on unit goals is time not spent on game rules.

There are many wonderful gaming experiences in single player FPS, but we are choosing to leave them behind to give us a purity of focus that will let us make significant advances in the multiplayer experience.

The emphasis will be on making every aspect as robust and high quality as possible, rather than trying to add every conceivable option anyone could want. We will not be trying to take the place of every mod ever produced, but we hope to satisfy a large part of the network gaming audience with the out of box experience.

There is a definite effect on graphics technology decisions. Much of the positive feedback in a single player FPS is the presentation of rich visual scenes, which are often at the expense of framerate. A multiplayer level still needs to make a good first impression, but after you have seen it a hundred times, the speed of the game is more important. This means that there are many aggressive graphics technologies that I will not pursue because they are not appropriate to the type of game we are creating.

The graphics engine will still be OpenGL only, with significant new features not seen anywhere before, but it will also have fallback modes to render at roughly Quake 2 quality and speed.

-----------------------------------------
John Carmack's .plan for Jul 04, 1998
-----------------------------------------

Here is the real story on the movement physics changes.

Zoid changed the movement code in a way that he felt improved gameplay in the 3.15 release.

We don't directly supervise most of the work Zoid does. One of the main reasons we work with him is that I respect his judgment, and I feel that his work benefits the community quite a bit with almost no effort on my part. If I had to code review every change he made, it wouldn't be worth the effort.

Zoid has "ownership" of the Quake, Glquake, and QuakeWorld codebases. We don't intend to do any more modifications at Id on those sources, so he has pretty free rein within his discretion.

We passed the Quake 2 codebase over to him for the addition of new features like auto download, but it might have been a bit premature, because official mission packs were still in development, and unlike glquake and quakeworld, Q2 is a product that must remain official and supported, so the scope of his freedoms should have been spelled out a little more clearly.

The air movement code wasn't a good thing to change in Quake 2, because the codebase still had to support all the commercial single player levels, and subtle physics changes can have lots of unintended effects.

QuakeWorld didn't support single player maps, so it was a fine place to experiment with physics changes.

QuakeArena is starting with fresh new data, so it is also a good place to experiment with physics changes.

Quake 2 cannot be allowed to evolve in a way that detracts from the commercial single player levels.

The old style movement should not be referred to as "real world physics". None of the Quake physics are remotely close to real world physics, so I don't think one way is significantly more "real" than the other. In Q2, you accelerate from 0 to 27 mph in 1/30 of a second, which is just as unrealistic as being able to accelerate in midair..

-----------------------------------------
John Carmack's .plan for Jul 05, 1998
-----------------------------------------

I am not opposed to adding a flag to control the movement styles. I was rather expecting it to be made optional in 3.17, but I haven't been directly involved in the last few releases.

The way this played out in public is a bit unfortunate. Everyone at Id is busy full time with the new product, so we just weren't paying enough attention to the Quake 2 modifications. Some people managed to read into my last update that we were blaming Zoid for things. Uh, no. I think he was acting within his charter (catering to the community) very well, it just interfered with an aspect of the game that shouldn't have been modified. We just never made it explicitly clear that it shouldn't have been modified.

It is a bit amusing how, after the QuakeArena announcement, I got flamed by lots of people for abandoning single player play (even though we aren't, really), but after I say that Quake 2 can't forget that it is a single player game, I get flamed by a different set of people who think it is stupid to care about single player anymore when all "everyone" plays is multiplayer. The joy of having a wide audience that knows your email address.

-----------------------------------------
John Carmack's .plan for Jul 16, 1998
-----------------------------------------

I have spent the last two days working with Apple's Rhapsody DR2, and I like it a lot.

I was disappointed with the original DR1 release. It was very slow and seemed to have added the worst elements of the Mac experience (who the hell came up with that windowshade minimizing?) while taking away some of the strengths of NEXTSTEP.

Things are a whole lot better in the latest release. General speed is up, memory consumption is down, and the UI feels consistent and productive.

It's still not as fast as Windows, and probably never will be, but I think the tradeoffs are valid.

There are so many things that are just fundamentally better in the Rhapsody design than in Windows: frameworks, the Yellow Box APIs, fat binaries, buffered windows, strong multi user support, strong system / local separation, NetInfo, etc.

Right now, I think WindowsNT is the best place to do graphics development work, but if the 3D acceleration issue were properly addressed on Rhapsody, I think that I could be happy using it as my primary development platform.

I ported the current Quake codebase to Rhapsody to test out Conix's beta OpenGL. The game isn't really playable with the software emulated OpenGL, but it functions properly, and it makes a fine dedicated server.

We are going to try to stay on top of the portability a little better for QA. Quake 2 slid a bit because we did the development on NT instead of NEXTSTEP, and that made the IRIX port a lot more of a hassle than the original glquake port.

I plan on using the Rhapsody system as a dedicated server during development, and Brian will be using an Alpha-NT system for a lot of testing, which should give us pretty good coverage of the portability issues.

I'm supposed to go out and have a bunch of meetings at Apple next month to cover games, graphics, and hardware. Various parts of Apple have scheduled meetings with me on three separate occasions over the past couple years, but they have always been canceled for one reason or another (they laid off the people I was going to meet with once..).

I have said some negative things about MacOS before, but my knowledge of the Mac is five years old. There was certainly the possibility that things had improved since then, so I spent some time browsing Mac documentation recently. I was pretty amused. A stack sniffer. Patching trap vectors. Cooperative multitasking. Application memory partitions. Heh.

I'm scared of MacOS X. As far as I can tell, the basic plan is to take Rhapsody and bolt all the MacOS APIs into the kernel. I understand that that may well be a sensible biz direction, but I fear it.

In other operating system news, Be has glquake running hardware accelerated on their upcoming OpenGL driver architecture. I gave them access to the glquake and quake 2 codebases for development purposes, and I expect we will work out an agreement for distribution of the ports.

Any X server vendors working on hardware accelerated OpenGL should get in touch with Zoid about interfacing and tuning with the Id OpenGL games on Linux.

-----------------------------------------
John Carmack's .plan for Jul 29, 1998
-----------------------------------------

My F50 took some twin turbo vitamins.

Rear wheel numbers: 602 hp @ 8200 rpm, 418 ft-lb @ 7200 rpm

This is very low boost, but I got the 50% power increase I was looking for, and hopefully it won't be making any contributions to my piston graveyard.

There will be an article in Turbo magazine about it, and several other car magazines want to test it out. They usually start out with "He did WHAT to an F50???" :)

Brian is getting a nitrous kit installed in his Viper, and Cash just got his suspension beefed up, so we will be off to the dragstrip next month to sort everything out again.

-----------------------------------------
John Carmack's .plan for Aug 17, 1998
-----------------------------------------

I added support for HDTV style wide screen displays in QuakeArena, so 24" and 28" monitors can now cover the entire screen with game graphics.

On a normal 4:3 aspect ratio screen, a 90 degree horizontal field of view gives a 75 degree vertical field of view. If you keep the vertical fov constant and run on a wide screen, you get a 106 degree horizontal fov.

Because we specify fov with the horizontal measurement, you need to change fov when going into or out of a wide screen mode. I am considering changing fov to be the vertical measurement, but it would probably cause a lot of confusion if "fov 90" becomes a big fisheye.

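The conversion between the two measurements just follows from projecting onto the view plane. This little function is my illustration of the arithmetic, not engine code:

[code]
#include <math.h>

// Hold the vertical fov fixed and solve for the horizontal fov on a
// different aspect ratio (aspect = width / height).
float WideScreenFov( float hfovDegrees, float oldAspect, float newAspect ) {
	float halfH = hfovDegrees * ( M_PI / 360.0 );	// half hfov, radians
	float halfV = atan( tan( halfH ) / oldAspect );	// implied half vfov
	return 2.0 * atan( tan( halfV ) * newAspect ) * ( 360.0 / ( 2.0 * M_PI ) );
}

// WideScreenFov( 90, 4.0/3.0, 16.0/9.0 ) gives about 106 degrees, and
// the implied vertical fov at 4:3 comes out near 74-75 degrees,
// matching the numbers quoted above.
[/code]
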
Many video card drivers are supporting the ultra high res settings like 1920 * 1080, but hopefully they will also add support for lower settings that can be good for games, like 856 * 480.

I spent a day out at Apple last week going over technical issues.

I'm feeling a lot better about MacOS X. Almost everything I like about Rhapsody will be there, plus some solid additions.

I presented the OpenGL case directly to Steve Jobs as strongly as possible.

If Apple embraces OpenGL, I will be strongly behind them. I like OpenGL more than I dislike MacOS. :)

-

Last Friday I got a phone call: "want to make some exhibition runs at the import / domestic drag wars this Sunday?". It wasn't particularly good timing, because the TR had a slipping clutch and the F50 still hadn't gotten its computer mapping sorted out, but we got everything functional in time.

The tech inspector said that my cars weren't allowed to run in the 11s at the event because they didn't have roll cages, so I was supposed to go easy.

The TR wasn't running its best, only doing low 130 mph runs. The F50 was making its first sorting out passes at the event, but it was doing ok. My last pass was an 11.8 (oops) @ 128, but we still have a ways to go to get the best times out of it.

I'm getting some racing tires on the F50 before I go back. It sucked watching a tiny Honda race car jump ahead of me off the line. :)

I think ESPN took some footage at the event.

-----------------------------------------
John Carmack's .plan for Sep 08, 1998
-----------------------------------------

I just got a production TNT board installed in my Dolch today.

The Riva 128 was a troublesome part. It scored well on benchmarks, but it had some pretty broken aspects to it, and I never recommended it (you are better off with an Intel i740).

There aren't any troublesome aspects to TNT. It's just great. Good work, Nvidia.

In terms of raw speed, a 16 bit color multitexture app (like Quake / Quake 2) should still run a bit faster on a Voodoo2, and an SLI Voodoo2 should be faster for all 16 bit color rendering, but TNT has a lot of other things going for it:

32 bit color and 24 bit z buffers. They cost speed, but it is usually a better quality tradeoff to go one resolution lower but with twice the color depth.

More flexible multitexture combine modes. Voodoo can use its multitexture for diffuse lightmaps, but not for the specular lightmaps we offer in QuakeArena. If you want shiny surfaces, Voodoo winds up leaving half of its texturing power unused (you can still run with diffuse lightmaps for max speed). A sketch of the two combine setups follows this list.

Stencil buffers. There aren't any apps that use them yet, but stencil allows you to do a lot of neat tricks.

More texture memory. Even more than it seems (16 vs 8 or 12), because all of the TNT's memory can be used without restrictions. Texture swapping is the Voodoo's biggest problem.

3D in desktop applications. There is enough memory that you don't have to worry about window and desktop size limits, even at 1280*1024 true color resolution.

Better OpenGL ICD. 3dfx will probably do something about that, though.

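For the combine-mode point above, here is roughly what the two lightmap setups look like through the 1998-era ARB_multitexture and texture_env_add extensions. This is generic GL setup for the technique, not QuakeArena source; in a real app glActiveTextureARB and the extension tokens come from the extension headers:

[code]
#include <GL/gl.h>

void SetupLightmapStage( GLuint lightmapTexture, int specular ) {
	glActiveTextureARB( GL_TEXTURE1_ARB );	// second texture unit
	glEnable( GL_TEXTURE_2D );
	glBindTexture( GL_TEXTURE_2D, lightmapTexture );

	if ( specular ) {
		// specular lightmaps are ADDED to the base texture (GL_ADD
		// is from texture_env_add); hardware whose combiners can't
		// add in the second stage has to fall back to a second
		// rendering pass with additive blending
		glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD );
	} else {
		// diffuse lightmaps modulate, which any dual-TMU part can do
		glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
	}
}
[/code]
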
This is the shape of 3D boards to come. Professional graphics level rendering quality with great performance at a consumer price.

We will be releasing preliminary QuakeArena benchmarks on all the new boards in a few weeks. Quake 2 is still a very good benchmark for moderate polygon counts, so our test scenes for QA involve very high polygon counts, which stress driver quality a lot more. There are a few surprises in the current timings..

-

A few of us took a couple days off in Vegas this weekend. After about ten hours at the tables over Friday and Saturday, I got a tap on the shoulder..

Three men in dark suits introduced themselves and explained that I was welcome to play any other game in the casino, but I am not allowed to play blackjack anymore.

Ah well, I guess my blackjack days are over. I was actually down a bit for the day when they booted me, but I made +$32k over five trips to Vegas in the past two years or so.

I knew I would get kicked out sooner or later, because I don't play "safely". I sit at the same table for several hours, and I range my bets around 10 to 1.

-----------------------------------------
John Carmack's .plan for Sep 10, 1998
-----------------------------------------

I recently set out to start implementing the dual-processor acceleration for QA, which I have been planning for a while. The idea is to have one processor doing all the game processing, database traversal, and lighting, while the other processor does absolutely nothing but issue OpenGL calls.

This effectively treats the second processor as a dedicated geometry accelerator for the 3D card. This can only improve performance if the card isn't the bottleneck, but Voodoo2 and TNT cards aren't hitting their limits at 640*480 on even very fast processors right now.

For single player games where there is a lot of cpu time spent running the server, there could conceivably be up to an 80% speed improvement, but for network games and timedemos a more realistic goal is a 40% or so speed increase. I will be very satisfied if I can make a dual Pentium Pro 200 system perform like a PII-300.

I started on the specialized code in the renderer, but it struck me that it might be possible to implement SMP acceleration with a generic OpenGL driver, which would allow Quake 2 / SiN / Half-Life to take advantage of it well before QuakeArena ships.

It took a day of hacking to get the basic framework set up: an smpgl.dll that spawns another thread that loads the original opengl32.dll or 3dfxgl.dll, and watches a work queue for all the functions to call.

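The marshalling boils down to encoding each GL call as { function pointer, parameter count, 32-bit parms } and replaying it on the other thread. This is a toy version of that scheme with invented names (wrap-around and synchronization handling omitted), matching the encoding described further down:

[code]
#include <string.h>

#define QUEUE_BYTES	( 2 * 1024 * 1024 )

typedef void (*glFunc_t)();

// assumed helper: pushes numParms 32-bit args and calls func
extern void DispatchCall( glFunc_t func, int numParms, const unsigned *parms );

static unsigned char		queueData[QUEUE_BYTES];
static volatile unsigned	queueHead, queueTail;	// byte offsets

// producer: every wrapped GL entry point funnels through here
void Queue_Call( glFunc_t func, int numParms, const unsigned *parms ) {
	unsigned *p = (unsigned *)&queueData[queueHead];

	p[0] = (unsigned)func;		// 32-bit pointers assumed (it was 1998)
	p[1] = (unsigned)numParms;	// every parameter is 32 bits wide
	memcpy( p + 2, parms, numParms * sizeof( unsigned ) );
	queueHead += ( 2 + numParms ) * sizeof( unsigned );
}

// consumer: the dedicated GL thread spins here issuing the real calls
void Queue_Drain( void ) {
	while ( queueTail != queueHead ) {
		unsigned *p = (unsigned *)&queueData[queueTail];
		DispatchCall( (glFunc_t)p[0], (int)p[1], p + 2 );
		queueTail += ( 2 + p[1] ) * sizeof( unsigned );
	}
}
[/code]
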
I get it basically working, then start doing some timings. It's 20% slower than the single processor version.

I go in and optimize all the queueing and working functions, tune the communications facilities, check for SMP cache collisions, etc.

After a day of optimizing, I finally squeak out some performance gains on my tests, but they aren't very impressive: 3% to 15% on one test scene, but still slower on another one.

This was fairly depressing. I had always been able to get pretty much linear speedups out of the multithreaded utilities I wrote, even up to sixteen processors. The difference is that the utilities just split up the work ahead of time, then don't talk to each other until they are done, while here the two threads work in a high bandwidth producer / consumer relationship.

I finally got around to timing the actual communication overhead, and I was appalled: it was taking 12 msec to fill the queue, and 17 msec to read it out on a single frame, even with nothing else going on. I'm surprised things got faster at all with that much overhead.

The test scene I was using created about 1.5 megs of data to relay all the function calls and vertex data for a frame. That data had to go to main memory from one processor, then back out of main memory to the other. Admittedly, it is a bitch of a scene, but that is where you want the acceleration..

The write times could be made over twice as fast if I could turn on the PII's write combining feature on a range of memory, but the reads (which were the gating factor) can't really be helped much.

Streaming large amounts of data to and from main memory can be really grim. The next write may force a cache writeback to make room for it, then a read from memory to fill the cacheline (even if you are going to write over the entire thing), then eventually the writeback from the cache to main memory where you wanted it in the first place. You also tend to eat one more read when your program wants to use the original data that got evicted at the start.

What is really needed for this type of interface is a streaming read cache protocol that performs similarly to the write combining: three dedicated cachelines that let you read or write from a range without evicting other things from the cache, automatically prefetching the next cacheline as you read.

Intel's write combining modes work great, but they can't be set directly from user mode. All drivers that fill DMA buffers (like OpenGL ICDs..) should definitely be using them, though.

Prefetch instructions can help with the stalls, but they still don't prevent all the wasted cache evictions.

It might be possible to avoid main memory altogether by arranging things so that the sending processor ping-pongs between buffers that fit in L2, but I'm not sure if a cache coherent read on PIIs just goes from one L2 to the other, or if it becomes a forced memory transaction (or worse, two memory transactions). It would also limit the maximum amount of overlap in some situations. You would also get cache invalidation bus traffic.

I could probably trim 30% of my data by going to a byte level encoding of all the function calls, instead of the explicit function pointer / parameter count / all-parms-are-32-bits scheme that I have now, but half of the data is just raw vertex data, which isn't going to shrink unless I did evil things like quantize floats to shorts.

Too much effort for what looks like a relatively minor speedup. I'm giving up on this approach, and going back to explicit threading in the renderer so I can make most of the communicated data implicit.

Oh well. It was amusing work, and I learned a few things along the way.

-----------------------------------------
John Carmack's .plan for Oct 14, 1998
-----------------------------------------

It has been difficult to write .plan updates lately. Every time I start writing something, I realize that I'm not going to be able to cover it satisfactorily in the time I can spend on it. I have found that terse little comments either get misinterpreted, or I get deluged by email from people wanting me to expand upon them.

I wanted to do a .plan about my evolving thoughts on code quality and lessons learned through Quake and Quake 2, but in the interest of actually completing an update, I decided to focus on one change that was intended to just clean things up, but had a surprising number of positive side effects.

Since DOOM, our games have been defined with portability in mind. Porting to a new platform involves having a way to display output, and having the platform tell you about the various relevant inputs. There are four principal inputs to a game: keystrokes, mouse moves, network packets, and time. (If you don't consider time an input value, think about it until you do - it is an important concept.)

These inputs were taken in separate places, as seemed logical at the time. A function named Sys_SendKeyEvents() was called once a frame that would rummage through whatever it needed to on a system level, and call back into game functions like Key_Event( key, down ) and IN_MouseMoved( dx, dy ). The network system dropped into system specific code to check for the arrival of packets. Calls to Sys_Milliseconds() were littered all over the code for various reasons.

I felt that I had slipped a bit on the portability front with Q2 because I had been developing natively on Windows NT instead of cross developing from NEXTSTEP, so I was reevaluating all of the system interfaces for Q3.

I settled on combining all forms of input into a single system event queue, similar to the Windows message queue. My original intention was to just rigorously define where certain functions were called and cut down the number of required system entry points, but it turned out to have much stronger benefits.

With all events coming through one point (the return values from system calls, including the filesystem contents, are "hidden" inputs that I make no attempt at capturing), it was easy to set up a journalling system that recorded everything the game received. This is very different from demo recording, which just simulates a network level connection and lets time move at its own rate. Realtime applications have a number of unique development difficulties because of the interaction of time with inputs and outputs.

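The shape of such a queue is simple. This is a hypothetical reconstruction of the idea (field and function names invented for the illustration); the crucial detail is that the timestamp is part of the event, so replaying the journal reproduces time exactly along with every other input:

[code]
#include <stdio.h>

typedef enum { SE_KEY, SE_MOUSE, SE_PACKET } sysEventType_t;

typedef struct {
	int		evTime;		// Sys_Milliseconds() at queue time
	sysEventType_t	evType;
	int		evValue;	// key, or dx
	int		evValue2;	// down flag, or dy
} sysEvent_t;

extern sysEvent_t PumpSystemEvents( void );	// assumed platform layer

static FILE	*journalFile;	// write mode recording, read mode replaying
static int	journalReplay;

sysEvent_t Com_GetEvent( void ) {
	sysEvent_t ev;

	if ( journalReplay ) {
		// batch mode: the whole session comes back out of the file,
		// at whatever speed a debugger, profiler, or bounds checker
		// allows, covering exactly the same code paths
		fread( &ev, sizeof( ev ), 1, journalFile );
		return ev;
	}

	ev = PumpSystemEvents();
	if ( journalFile ) {
		fwrite( &ev, sizeof( ev ), 1, journalFile );	// record everything
	}
	return ev;
}
[/code]
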
Transient flaw debugging. If a bug can be reproduced, it can be fixed. The nasty bugs are the ones that only happen every once in a while after playing randomly, like occasionally getting stuck on a corner. Often when you break in and investigate it, you find that something important happened the frame before the event, and you have no way of backing up. Even worse are realtime smoothness issues - was that jerk of his arm a bad animation frame, a network interpolation error, or my imagination?

Accurate profiling. Using an intrusive profiler on Q2 doesn't give accurate results because of the realtime nature of the simulation. If the program is running half as fast as normal due to the instrumentation, it has to do twice as much server simulation as it would if it wasn't instrumented, which also goes slower, which compounds the problem. Aggressive instrumentation can slow it down to the point of being completely unplayable.

Realistic bounds checker runs. Bounds Checker is a great tool, but you just can't interact with a game built for final checking, it's just waaaaay too slow. You can let a demo loop play back overnight, but that doesn't exercise any of the server or networking code.

The key point: journaling of time along with other inputs turns a realtime application into a batch process, with all the attendant benefits for quality control and debugging. These problems, and many more, just go away. With a full input trace, you can accurately restart the session and play back to any point (conditional breakpoint on a frame number), or let a session play back at an arbitrarily degraded speed, but cover exactly the same code paths..

I'm sure lots of people realized that immediately, but it only truly sunk in for me recently. In thinking back over the years, I can see myself feeling around the problem, implementing partial journaling of network packets, and including the "fixedtime" cvar to eliminate most timing reproducibility issues, but I never hit on the proper global solution. I had always associated journaling with turning an interactive application into a batch application, but I never considered the small modification necessary to make it applicable to a realtime application.

In fact, I was probably blinded to the obvious because of one of my very first successes: one of the important technical achievements of Commander Keen 1 was that, unlike most games of the day, it adapted its play rate based on the frame speed (remember all those old games that got unplayable when you got a faster computer?). I had just resigned myself to the non-deterministic timing of frames that resulted from adaptive simulation rates, and that probably influenced my perspective on it all the way until this project.

It's nice to see a problem clearly in its entirety for the first time, and know exactly how to address it.

-----------------------------------------
John Carmack's .plan for Nov 03, 1998
-----------------------------------------

This was the most significant thing I talked about at The Frag, so here it is for everyone else.

The way the QA game architecture has been developed so far has been as two separate binary DLLs: one for the server side game logic, and one for the client side presentation logic.

While it was easiest to begin development like that, there are two crucial problems with shipping the game that way: security and portability.

It's one thing to ask the people who run dedicated servers to make informed decisions about the safety of a given mod, but it's a completely different matter to auto-download a binary image to a first time user connecting to a server they found.

The Quake 2 server crashing attacks have certainly proven that there are hackers who enjoy attacking games, and shipping around binary code would be a very tempting opening for them to do some very nasty things.

With Quake and Quake 2, all game modifications were strictly server side, so any port of the game could connect to any server without problems. With Quake 2's binary server DLLs, not all ports could necessarily run a server, but they could all play.

With significant chunks of code now running on the client side, if we stuck with binary DLLs then the less popular systems would find that they could not connect to new servers because the mod code hadn't been ported. I considered having things set up in such a way that client game DLLs could be sort of forward-compatible, where they could always connect and play, but new commands and entity types just might not show up. We could also GPL the game code to force mod authors to release source with the binaries, but dealing with all the porting would still be inconvenient.

Related to both issues is client side cheating. Certain cheats are easy to do if you can hack the code, so the server will need to verify which code the client is running. With multiple ported versions, it wouldn't be possible to do any binary verification.

If we were willing to wed ourselves completely to the Windows platform, we might have pushed ahead with some attempt at binary verification of DLLs, but I ruled that option out. I want QuakeArena running on every platform that has hardware accelerated OpenGL and an internet connection.

The only real solution to these problems is to use an interpreted language like Quake 1 did. I have reached the conclusion that the benefits of a standard language outweigh the benefits of a custom language for our purposes. I would not go back and extend QC, because that stretches the effort from simply system and interpreter design to include language design, and there is already plenty to do.

I had been working under the assumption that Java was the right way to go, but recently I reached a better conclusion.

The programming language for QuakeArena mods is interpreted ANSI C. (Well, I am dropping the double data type, but otherwise it should be pretty conformant.)

The game will have an interpreter for a virtual RISC-like CPU. This should have a minor speed benefit over a byte-coded, stack based Java interpreter. Loads and stores are confined to a preset block of memory, and access to all external system facilities is done with system traps to the main game code, so it is completely secure.

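To make the security property concrete, here is a toy fragment of what such an interpreter loop can look like (my sketch, with an invented opcode set and encoding, not the real thing): every memory access goes through a mask that confines it to the VM's data block, and everything external becomes a trap back into the engine.

[code]
#define DATA_SIZE	( 1 << 20 )		// must be a power of two
#define DATA_MASK	( DATA_SIZE - 1 )

typedef struct {
	int		pc;
	int		r[32];			// register file
	unsigned char	data[DATA_SIZE];	// the mod's entire address space
	const int	*code;
} vm_t;

enum { OP_LOAD, OP_STORE, OP_ADD, OP_SYSCALL };

extern int Engine_Syscall( vm_t *vm, int num );	// assumed engine entry

void VM_Run( vm_t *vm ) {
	int	inst, op, a, b;

	for ( ;; ) {
		inst = vm->code[vm->pc++];
		op = inst >> 24;
		a = ( inst >> 16 ) & 31;
		b = ( inst >> 8 ) & 31;

		switch ( op ) {
		case OP_LOAD:	// rA = data[rB], address forced in-bounds
			vm->r[a] = *(int *)&vm->data[ vm->r[b] & DATA_MASK ];
			break;
		case OP_STORE:	// data[rB] = rA, address forced in-bounds
			*(int *)&vm->data[ vm->r[b] & DATA_MASK ] = vm->r[a];
			break;
		case OP_ADD:
			vm->r[a] += vm->r[b];
			break;
		case OP_SYSCALL:	// all external access traps to the engine
			vm->r[a] = Engine_Syscall( vm, vm->r[b] );
			break;
		}
		// alignment checks and a halt condition are elided for brevity
	}
}
[/code]
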
The tools necessary for building mods will all be freely available: a modified version of LCC and a new program called q3asm. LCC is a wonderful project - a cross platform, cross compiling ANSI C compiler done in under 20K lines of code. Anyone interested in compilers should pick up a copy of "A Retargetable C Compiler: Design and Implementation" by Fraser and Hanson.

You can't link against any libraries, so every function must be resolved. Things like strcmp, memcpy, rand, etc. must all be implemented directly. I have code for all the ones I use, but some people may have to modify their coding styles or provide implementations for other functions.

It is a fair amount of work to restructure all the interfaces to not share pointers between the system and the games, but it is a whole lot easier than porting everything to a new language. The client game code is about 10k lines, and the server game code is about 20k lines.

The drawback is performance. It will probably perform somewhat like QC. Most of the heavy lifting is still done in the builtin functions for path tracing and world sampling, but you could still hurt yourself by looping over tons of objects every frame. Yes, this does mean more load on servers, but I am making some improvements in other parts that I hope will balance things to about the way Q2 was on previous generation hardware.

There is also the amusing avenue of writing hand tuned virtual assembly language for critical functions..

I think this is The Right Thing.

-----------------------------------------
John Carmack's .plan for Nov 04, 1998
-----------------------------------------

More extensive comments on the interpreted-C decision later, but a quick note: the plan is to still allow binary dll loading so debuggers can be used, but it should be interchangeable with the interpreted code. Client modules can only be debugged if the server is set to allow cheating, but it would be possible to just use the binary interface for server modules if you wanted to sacrifice portability. Most mods will be able to be implemented with just the interpreter, but some mods that want to do extensive file access or out of band network communications could still be implemented just as they are in Q2. I will not endorse any use of binary client modules, though.

-----------------------------------------
John Carmack's .plan for Dec 29, 1998
-----------------------------------------

I am considering taking a shortcut with my virtual machine implementation that would make the integration a bit easier, but I'm not sure that it doesn't compromise the integrity of the base system.

The shortcut is to allow the interpreted code to live in the global address space, instead of in a private 0 based address space of its own. Store instructions from the VM would be confined to the interpreter's address space, but loads could access any structures.

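Relative to the interpreter sketch in the Nov 03 update, the change would only touch the two memory opcodes - again illustrative pseudo-opcodes, not real code:

[code]
// Fragment of the same hypothetical VM_Run() switch, showing the
// asymmetry being considered:
case OP_LOAD:
	// read-only access to shared structures (cvars, entity state)
	// at full interpreted speed, no "get" syscall needed
	vm->r[a] = *(int *)vm->r[b];
	break;
case OP_STORE:
	// writes still can't touch anything outside the sandbox
	*(int *)&vm->data[ vm->r[b] & DATA_MASK ] = vm->r[a];
	break;
[/code]
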
On the positive side:

This would allow full speed (well, full interpreted speed) access to variables shared between the main code and the interpreted modules. This allows system calls to return pointers, instead of filling in allocated space in the interpreter's address space.

For most things, this is just a convenience that will cut some development time. Most of the shared accesses could be recast as "get" system calls, and it is certainly arguable that that would be a more robust programming style.

The most prevalent change this would prevent is all the cvar_t uses. Things could stay in the same style as Q2, where cvar accesses are free and transparently updated. If the interpreter lives only in its own address space, then cvar access would have to be like Q1, where looking up a variable is a potentially time consuming operation, and you wind up adding lots of little cvar caches that are updated every frame or restart.

On the negative side:

A client game module with a bug could cause a bus error, which would not be possible with a pure local address space interpreter.

I can't think of any exploitable security problems that read only access to the entire address space opens, but if anyone thinks of something, let me know.

-----------------------------------------
John Carmack's .plan for Dec 30, 1998
-----------------------------------------

I got several vague comments about being able to read "stuff" from shared memory, but no concrete examples of security problems.

However, Gregory Maxwell pointed out that it wouldn't work cross platform with 64 bit pointer environments like Linux Alpha. That is a killer, so I will be forced to do everything the hard way. It's probably for the best from a design standpoint anyway, but it will take a little more effort.

johnc_plan_1999.txt ADDED
The diff for this file is too large to render.

johnc_plan_2000.txt ADDED

-----------------------------------------
John Carmack's .plan for Feb 23, 2000
-----------------------------------------

This is a public statement that is also being sent directly to Slade at QuakeLives regarding http://www.quakelives.com/main/ql.cgi?section=dlagreement&file=qwcl-win32/

I see both sides of this. Your goals are positive, and I understand the issues and the difficulties that your project has to work under because of the GPL. I have also seen some GPL zealots acting petty and immature towards you very early on (while it is within everyone's rights to DEMAND code under the GPL, it isn't necessarily the best attitude to take), which probably colors some of your views on the subject.

We discussed several possible legal solutions to the issues.

This isn't one of them.

While I doubt your "give up your rights" click through would hold up in court, I am positive that you are required to give the source to anyone that asks for it that got a binary from someone else. This doesn't provide the obscurity needed for a gaming level of security.

I cut you a lot of slack because I honestly thought you intended to properly follow through with the requirements of the GPL, and you were just trying to get something fun out ASAP. It looks like I was wrong.

If you can't stand to work under the GPL, you should release the code to your last binary and give up your project. I would prefer that you continue your work, but abide by the GPL.

If necessary, I will pay whatever lawyer the Free Software Foundation recommends to pursue this.

-----------------------------------------
John Carmack's .plan for Feb 24, 2000
-----------------------------------------

Some people took it upon themselves to remotely wreck Slade's development system. That is no more defensible than breaking into Id and smashing something.

The idea isn't to punish anyone, it is to have them comply with the license and continue to contribute. QuakeLives has quite a few happy users, and it is in everyone's best interest to have development continue. It just has to be by the rules.

-----------------------------------------
|
32 |
+
John Carmack's .plan for Mar 07, 2000
|
33 |
+
-----------------------------------------
|
34 |
+
|
35 |
+
Virtualized video card local memory is The Right Thing.
|
36 |
+
|
37 |
+
This is something I have been preaching for a couple years, but I finally got around to setting all the issues down in writing.
|
38 |
+
|
39 |
+
Now, the argument (and a whole bunch of tertiary information):
|
40 |
+
|
41 |
+
If you had all the texture density in the world, how much texture memory would be needed on each frame?
|
42 |
+
|
43 |
+
For directly viewed textures, mip mapping keeps the amount of referenced texels between one and one quarter of the drawn pixels. When anisotropic viewing angles and upper level clamping are taken into account, the number gets smaller. Take 1/3 as a conservative estimate.
|
44 |
+
|
45 |
+
Given a fairly aggressive six texture passes over the entire screen, that equates to needing twice as many texels as pixels. At 1024x768 resolution, well under two million texels will be referenced, no matter what the finest level of detail is. This is the worst case, assuming completely unique texturing with no repeating. More commonly, less than one million texels are actually needed.
|
46 |
+
|
47 |
+
As anyone who has tried to run certain Quake 3 levels in high quality texture mode on an eight or sixteen meg card knows, it doesn't work out that way in practice. There is a fixable part and some more fundamental parts to the fall-over-dead-with-too-many-textures problem.
|
48 |
+
|
49 |
+
The fixable part is that almost all drivers perform pure LRU (least recently used) memory management. This works correctly as long as the total amount of textures needed for a given frame fits in the card's memory after they have been loaded. As soon as you need a tiny bit more memory than fits on the card, you fall off of a performance cliff. If you need 14 megs of textures to render a frame, and your graphics card has 12 megs available after its frame buffers, you wind up loading 14 megs of texture data over the bus every frame, instead of just the 2 megs that don't fit. Having the cpu generate 14 megs of command traffic can drop you way into the single digit frame rates on most drivers.
|
50 |
+
|
51 |
+
If an application makes reasonable effort to group rendering by texture, and there is some degree of coherence in the order of texture references between frames, much better performance can be gotten with a swapping algorithm that changes its behavior instead of going into a full thrash:
|
52 |
+
|

[code]
While ( memory allocation for new texture fails )
    Find the least recently used texture.
    If the LRU texture was not needed in the previous frame,
        Free it
    Else
        Free the most recently used texture that isn't bound to an active texture unit
[/code]
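
A minimal C sketch of that policy (my illustration, not anyone's actual driver code; the list helpers and allocator are hypothetical stand-ins for driver internals):

[code]
/* LRU replacement that falls back to MRU eviction when thrashing. */
#include <stddef.h>

typedef struct texture_s texture_t;

extern int        r_frameCount;               /* current frame number          */
extern texture_t *Tex_GetLRU(void);           /* least recently used texture   */
extern texture_t *Tex_GetMRUUnbound(void);    /* newest one not bound to a TMU */
extern int        Tex_LastFrameUsed(const texture_t *t);
extern void       Tex_Free(texture_t *t);
extern void      *Tex_TryAlloc(size_t bytes); /* NULL when card memory is full */

void *Tex_AllocWithReplacement(size_t bytes)
{
    void *mem;

    while ((mem = Tex_TryAlloc(bytes)) == NULL) {
        texture_t *lru = Tex_GetLRU();

        if (Tex_LastFrameUsed(lru) < r_frameCount - 1) {
            /* plain LRU eviction: it wasn't needed last frame */
            Tex_Free(lru);
        } else {
            /* every resident texture was used last frame: we are
               thrashing, so overwrite the MRU end as a scratch area
               instead of plowing over textures that will be needed
               at the start of the next frame */
            Tex_Free(Tex_GetMRUUnbound());
        }
    }
    return mem;
}
[/code]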

Freeing the MRU texture seems counterintuitive, but what it does is cause the driver to use the last bit of memory as a sort of scratchpad that gets constantly overwritten when there isn't enough space. Pure LRU plows over all the other textures that are very likely going to be needed at the beginning of the next frame, which will then plow over all the textures that were loaded on top of them.

If an application uses textures in a completely random order, any given replacement policy has the same effect.

Texture priority for swapping is a non-feature. There is NO benefit to attempting to statically prioritize textures for swapping. Either a texture is going to be referenced in the next frame, or it isn't. There aren't any useful gradations in between. The only hint that would be useful would be a notice that a given texture is not going to be in the next frame, and that just doesn't come up very often or cover very many texels.

With the MRU-on-thrash texture swapping policy, things degrade gracefully as the total amount of textures increases, but due to several issues, the total amount of textures calculated and swapped is far larger than the actual amount of texels referenced to draw pixels.

The primary problem is that textures are loaded as a complete unit, from the smallest mip map level all the way up to potentially a 2048 by 2048 top level image. Even if you are only seeing 16 pixels of it off in the distance, the entire 12 meg stack might need to be loaded.

Packing can also cause some amount of wasted texture memory. When you want to load a two meg texture, it is likely going to require a lot more than just two megs of free texture memory, because a lot of it is going to be scattered around in 8k to 64k blocks. At the pathological limit, this can waste half your texture memory, but more reasonably it is only going to be 10% or so, and cause a few extra texture swap outs.

On a frame at a time basis, there are often significant amounts of texels even in referenced mip levels that are not seen. The back sides of characters, and large textures on floors can often have less than 50% of their texels used during a frame. This is only an issue as they are being swapped in, because they will very likely be needed within the next few frames. The result is one big hitch instead of a steady loading.

There are schemes that can help with these problems, but they have costs.

Packing losses can be addressed with compaction, but that has rarely proven to be worthwhile in the history of memory management. A 128-bit graphics accelerator could compact and sort 10 megs of texture memory in about 10 msec if desired.

The problems with large textures can be solved by just not using large textures. Both packing losses and non-referenced texels can be reduced by chopping everything up into 64x64 or 128x128 textures. This requires preprocessing, adds geometry, and requires messy overlap of the textures to avoid seaming problems.

It is possible to estimate which mip levels will actually be needed and only swap those in. An application can't calculate exactly the mip map levels that will be referenced by the hardware, because there are slight variations between chips and the slope calculation would add significant processing overhead. A conservative upper bound can be taken by looking at the minimum normal distance of any vertex referencing a given texture in a frame. This will overestimate the required textures by 2x or so and still leave a big hit when the top mip level loads for big textures, but it can allow giant cathedral style scenes to render without swapping.

Clever programmers can always work harder to overcome obstacles, but in this case, there is a clear hardware solution that gives better performance than anything possible with software and just makes everyone's lives easier: virtualize the card's view of its local memory.

With page tables, address fragmentation isn't an issue, and with the graphics rasterizer only causing a page load when something from that exact 4k block is needed, the mip level problems and hidden texture problems just go away. Nothing sneaky has to be done by the application or driver, you just manage page indexes.

The hardware requirements are not very heavy. You need translation lookaside buffers (TLB) on the graphics chip, the ability to automatically load the TLB from a page table set up in local memory, and the ability to move a page from AGP or PCI into graphics memory and update the page tables and reference counts. You don't even need that many TLB, because graphics access patterns don't hop all over the place like CPU access can. Even with only a single TLB for each texture bilerp unit, reloads would only account for about 1/32 of the memory access if the textures were 4k blocked. All you would really want at the upper limit would be enough TLB for each texture unit to cover the texels referenced on a typical rasterization scan line.
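
In software model form, the translation is simple (a sketch under assumed parameters: 4k pages and a flat resident-page table; none of these names are a real chip or driver interface):

[code]
/* Software model of virtualized texture memory: the rasterizer presents
   a virtual texel address, the page table maps it into card memory, and
   a miss pulls just that 4k page across AGP. */
#include <stdint.h>

#define PAGE_SHIFT 12                   /* 4k pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

typedef struct {
    uint32_t physPage;                  /* page index in card memory   */
    int      resident;                  /* is the page in card memory? */
} pte_t;

extern pte_t   pageTable[];             /* set up in local memory */
extern uint8_t cardMemory[];
extern void    DmaPageFromAGP(uint32_t virtPage);  /* fills in pageTable[] */

uint8_t *TranslateTexelAddress(uint32_t virtAddr)
{
    uint32_t virtPage = virtAddr >> PAGE_SHIFT;
    uint32_t offset   = virtAddr & (PAGE_SIZE - 1);

    if (!pageTable[virtPage].resident)
        DmaPageFromAGP(virtPage);       /* the "page fault": move 4k over the bus */

    return &cardMemory[(pageTable[virtPage].physPage << PAGE_SHIFT) + offset];
}
[/code]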

Some programmers will say "I don't want the system to manage the textures, I want full control!" There are a couple responses to that. First, a page level management scheme has flexibility that you just can't get with a software only scheme, so it is a set of brand new capabilities. Second, you can still just choose to treat it as a fixed size texture buffer and manage everything yourself with updates. Third, even if it WAS slower than the craftiest possible software scheme (and I seriously doubt it), so much of development is about willingly trading theoretical efficiency for quicker, more robust development. We don't code overlays in assembly language any more..

Some hardware designers will say something along the lines of "But the graphics engine goes idle when you are pulling the page over from AGP!" Sure, you are always better off to just have enough texture memory and never swap, and this feature wouldn't let you claim any more megapixels or megatris, but every card winds up not having enough memory at some point. Ignoring those real world cases isn't helping your customers. In any case, it goes idle a hell of a lot less than if you were loading the entire texture over the command fifo.

3Dlabs is supposed to have some form of virtual memory management in the Permedia 3, but I am not familiar with the details (if anyone from 3Dlabs wants to send me the latest register specs, I would appreciate it!).

A mouse controlled first person shooter is fairly unique in how quickly it can change the texture composition of a scene. A 180-degree snap turn can conceivably bring in a completely different set of textures on a subsequent frame. Almost all other graphics applications bring textures in at a much steadier pace.

So, given that 180-degree snap turn to a completely different and uniquely textured scene, what would be the worst case performance? An AGP 2x bus is theoretically supposed to have over 500 MB/sec of bandwidth. It doesn't get that high in practice, but linear 4k block reads would give it the best possible conditions, and even at 300 MB/sec, reloading the entire texture working set would only take 10 msec.

Rendering is not likely to be buffered sufficiently to overlap appreciably with page loading, and the command transport for a complex scene will take significant time by itself, so it shows that a worst case scene will often not be able to be rendered in 1/60th of a second.

This is roughly the same lower bound that you get from a chip texturing directly from AGP memory. A direct AGP texture gains the benefit of fine-grained rendering overlap, but loses the benefit of subsequent references being in faster memory (outside of small on-chip caches). A direct AGP texture engine doesn't have the higher upper bounds of a cached texture engine, though. Its best and worst cases are similar (generally a good thing), but the cached system can bring several times more bandwidth to bear when it isn't forced to swap anything in.

The important point is that the lower performance bound is almost an order of magnitude faster than swapping in the textures as a unit by the driver.

If you just positively couldn't deal with the chance of that much worst case delay, some form of mip level biasing could be made to kick in, or you could try and do pre-touching, but I don't think it would ever be worth it. The worst imaginable case is acceptable, and you just won't hit that case very often.

Unless a truly large number of TLB are provided, the textures would need to be blocked. The reason is that with a linear texture, a 4k page maps to only a couple scan lines on very large textures. If you are going with the grain you get great reuse, but if you go across it, you wind up referencing a new page every couple texel accesses. What is wanted is an addressing mechanism that converts a 4k page into a square area in the texture, so the page access is roughly constant for all orientations. There is also a benefit from having a 128 bit access also map to a square block of pixels, which several existing cards already do. The same interleaving-of-low-order-bits approach can just be extended a few more bits.
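
That interleaving is easy to show in C (a sketch assuming 32 bit texels, so a 4k page holds a 32x32 texel square; the function names are mine):

[code]
/* Morton-style interleave of the low-order bits of the texel coordinate,
   so each linear 4k page covers a square block of the texture instead of
   a couple of scan lines. */
#include <stdint.h>

static uint32_t InterleaveLowBits(uint32_t x, uint32_t y, int bits)
{
    uint32_t addr = 0;
    for (int i = 0; i < bits; i++) {
        addr |= ((x >> i) & 1u) << (2 * i);      /* x bits go to even positions */
        addr |= ((y >> i) & 1u) << (2 * i + 1);  /* y bits go to odd positions  */
    }
    return addr;
}

/* Returns a texel index; width is the texture width in texels,
   assumed to be a multiple of the 32 texel block size. */
uint32_t BlockedTexelIndex(uint32_t x, uint32_t y, uint32_t width)
{
    const int      kBits  = 5;           /* 32x32 texel blocks = one 4k page */
    const uint32_t kBlock = 1u << kBits;
    uint32_t blocksPerRow = width / kBlock;
    uint32_t blockIndex   = (y / kBlock) * blocksPerRow + (x / kBlock);

    return blockIndex * kBlock * kBlock +
           InterleaveLowBits(x & (kBlock - 1), y & (kBlock - 1), kBits);
}
[/code]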

Dealing with blocked texture patterns is a hassle for a driver writer, but most graphics chips have a host blit capability that should let the chip deal with changing a linear blit into blocked writes. Application developers should never know about it, in any case.

There are some other interesting things that could be done if the page tables could trigger a cpu interrupt in addition to being automatically backed by AGP or PCI memory. Textures could be paged in directly from disk for truly huge settings, or decompressed from jpeg blocks, or even procedurally generated. Even the size limits of the AGP aperture could usefully be avoided if the driver wanted to manage each page's allocation.

Aside from all the basic swapping issues, there are a couple of other hardware trends that push things this way.

Embedded dram should be a driving force. It is possible to put several megs of extremely high bandwidth dram on a chip or die with a video controller, but it won't be possible (for a while) to cram a 64 meg GeForce in. With virtualized texturing, the major pressure on memory is drastically reduced. Even an 8mb card would be sufficient for 16 bit 1024x768 or 32 bit 800x600 gaming, no matter what the texture load.

The only thing that prevents a geometry processor based card from turning almost any set of commands in a display list into a single static dma buffer is the fact that textures may be swapped in and out, causing the register programming in the buffer to be wrong. With virtual texture addressing, a texture's address never changes, and an arbitrarily complex model can be described in a static dma buffer.

-----------------------------------------
John Carmack's .plan for Mar 27, 2000
-----------------------------------------

Seumas McNally

Two years ago, Id was contacted by the Starlight Foundation, an organization that tries to grant wishes to seriously ill kids. (www.starlight.org)

There was a young man with Hodgkin's Lymphoma who, instead of wanting to go to Disneyland or other traditional wishes, wanted to visit Id and talk with me about programming.

It turned out that Seumas McNally was already an accomplished developer. His family company, Longbow Digital Arts (www.longbowdigitalarts.com), had been doing quite respectably selling small games directly over the internet. It bore a strong resemblance to the early shareware days of Apogee and Id.

We spent the evening talking about graphics programmer things - the relative merits of voxels and triangles, procedurally generated media, level of detail management, APIs and platforms.

We talked at length about the balance between technology and design, and all the pitfalls that lie in the way of shipping a modern product.

We also took a dash out in my Ferrari, thinking "this is going to be the best excuse a cop will ever hear if we get pulled over".

Longbow continued to be successful, and eventually the entire family was working full time on "Treadmarks", their new 3D tank game.

Over email about finishing the technology in Treadmarks, Seumas once said "I hope I can make it". Not "be a huge success" or "beat the competition". Just "make it".

That is a yardstick to measure oneself by.

It is all too easy to lose your focus or give up with just the ordinary distractions and disappointments that life brings. This wasn't ordinary. Seumas had cancer. Whatever problems you may be dealing with in your life, they pale before having problems drawing your next breath.

He made it.

Treadmarks started shipping a couple months ago, and was entered in the Independent Games Festival at the Game Developer's Conference this last month. It came away with the awards for technical excellence, game design, and the grand prize.

I went out to dinner with the McNally family the next day, and had the opportunity to introduce Anna to them. One of the projects at Anna's new company, Fountainhead Entertainment (www.fountainheadent.com), is a documentary covering gaming, and she had been looking forward to meeting Seumas after hearing me tell his story a few times. The McNallys invited her to bring a film crew up to Canada and talk with everyone whenever she could.

Seumas died the next week.

I am proud to have been considered an influence in Seumas' work, and I think his story should be a good example for others. Through talent and determination, he took something he loved and made a success out of it in many dimensions.

See http://www.gamedev.net/community/memorial/seumas/ for more information.

-----------------------------------------
John Carmack's .plan for Apr 06, 2000
-----------------------------------------

Whenever I start a new graphics engine, I always spend a fair amount of time flipping back through older graphics books. It is always interesting to see how your changed perspective with new experience impacts your appreciation of a given article.

I was skimming through Jim Blinn's "A Trip Down The Graphics Pipeline" tonight, and I wound up laughing out loud twice.

From the book:

P73: I then empirically found that I had to scale by -1 in x instead of in z, and also to scale the xa and xf values by -1. (Basically I just put in enough minus signs after the fact to make it work.) Al Barr refers to this technique as "making sure you have made an even number of sign errors."

P131: The only lines that generate w=0 after clipping are those that pass through the z axis, the valley of the trough. These lines are lines that pass exactly through the eyepoint. After which you are dead and don't care about divide-by-zero errors.

If you laughed, you are a graphics geek.

My first recollection of a Jim Blinn article many years ago was skimming over it and thinking "My god, what ridiculously picky minutia." Over the last couple years, I found myself haranguing people over some fairly picky issues, like the LSB errors with cpu vs rasterizer face culling and screen edge clipping with guard band bit tests. After one of those pitches, I quite distinctly thought to myself "My god, I'm turning into Jim Blinn!" :)

-----------------------------------------
John Carmack's .plan for Apr 29, 2000
-----------------------------------------

We need more bits per color component in our 3D accelerators.

I have been pushing for a couple more bits of range for several years now, but I now extend that to wanting full 16 bit floating point colors throughout the graphics pipeline. A sign bit, ten bits of mantissa, and five bits of exponent (possibly trading a bit or two between the mantissa and exponent). Even that isn't all you could want, but it is the rational step.
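
For concreteness, here is what packing a 32 bit float into that s10e5 layout could look like (a sketch only: it flushes denormals to zero, clamps overflow to the largest finite value, and truncates instead of rounding):

[code]
/* Pack an IEEE 32 bit float into s10e5: 1 sign bit, 5 exponent bits
   (bias 15), 10 mantissa bits. */
#include <stdint.h>
#include <string.h>

uint16_t FloatToS10E5(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint16_t sign     = (uint16_t)((bits >> 16) & 0x8000u);
    int32_t  exponent = (int32_t)((bits >> 23) & 0xFF) - 127 + 15;
    uint32_t mantissa = bits & 0x007FFFFFu;

    if (exponent <= 0)
        return sign;                          /* flush tiny values to zero */
    if (exponent >= 31)
        return sign | (30 << 10) | 0x3FF;     /* clamp to max finite value */

    return sign | (uint16_t)(exponent << 10) | (uint16_t)(mantissa >> 13);
}
[/code]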
186 |
+
|
187 |
+
It is turning out that I need a destination alpha channel for a lot of the new rendering algorithms, so intermediate solutions like 10/12/10 RGB formats aren't a good idea. Higher internal precision with dithering to 32 bit pixels would have some benefit, but dithered intermediate results can easily start piling up the errors when passed over many times, as we have seen with 5/6/5 rendering.
|
188 |
+
|
189 |
+
Eight bits of precision isn't enough even for full range static image display. Images with a wide range usually come out fine, but restricted range images can easily show banding on a 24-bit display. Digital television specifies 10 bits of precision, and many printing operations are performed with 12 bits of precision.
|
190 |
+
|
191 |
+
The situation becomes much worse when you consider the losses after multiple operations. As a trivial case, consider having multiple lights on a wall, with their contribution to a pixel determined by a texture lookup. A single light will fall off towards 0 some distance away, and if it covers a large area, it will have visible bands as the light adds one unit, two units, etc. Each additional light from the same relative distance stacks its contribution on top of the earlier ones, which magnifies the amount of the step between bands: instead of going 0,1,2, it goes 0,2,4, etc. Pile a few lights up like this and look towards the dimmer area of the falloff, and you can believe you are back in 256-color land.
|
192 |
+
|
193 |
+
There are other more subtle issues, like the loss of potential result values from repeated squarings of input values, and clamping issues when you sum up multiple incident lights before modulating down by a material.
|
194 |
+
|
195 |
+
Range is even more clear cut. There are some values that have intrinsic ranges of 0.0 to 1.0, like factors of reflection and filtering. Normalized vectors have a range of -1.0 to 1.0. However, the most central quantity in rendering, light, is completely unbounded. We want a LOT more than a 0.0 to 1.0 range. Q3 hacks the gamma tables to sacrifice a bit of precision to get a 0.0 to 2.0 range, but I wanted more than that for even primitive rendering techniques. To accurately model the full human sensable range of light values, you would need more than even a five bit exponent.
|
196 |
+
|
197 |
+
This wasn't much of an issue even a year ago, when we were happy to just cover the screen a couple times at a high framerate, but realtime graphics is moving away from just "putting up wallpaper" to calculating complex illumination equations at each pixel. It is not at all unreasonable to consider having twenty textures contribute to the final value of a pixel. Range and precision matter.
|
198 |
+
|
199 |
+
A few common responses to this pitch:
|
200 |
+
|
201 |
+
"64 bits per pixel??? Are you crazy???" Remember, it is exactly the same relative step as we made from 16 bit to 32 bit, which didn't take all that long.
|
202 |
+
|
203 |
+
Yes, it will be slower. That's ok. This is an important point: we can't continue to usefully use vastly greater fill rate without an increase in precision. You can always crank the resolution and multisample anti-alaising up higher, but that starts to have diminishing returns well before you use of the couple gigatexels of fill rate we are expected to have next year. The cool and interesting things to do with all that fill rate involves many passes composited into less pixels, making precision important.
|
204 |
+
|
205 |
+
"Can we just put it in the texture combiners and leave the framebuffer at 32 bits?" No. There are always going to be shade trees that overflow a given number of texture units, and they are going to be the ones that need the extra precision. Scales and biases between the framebuffer and the higher precision internal calculations can get you some mileage (assuming you can bring the blend color into your combiners, which current cards can't), but its still not what you want. There are also passes which fundamentally aren't part of a single surface, but still combine to the same pixels, as with all forms of translucency, and many atmospheric effects.
|
206 |
+
|
207 |
+
"Do we need it in textures as well?" Not for most image textures, but it still needs to be supported for textures that are used as function look up tables.
|
208 |
+
|
209 |
+
"Do we need it in the front buffer?" Probably not. Going to a 64 bit front buffer would probably play hell with all sorts of other parts of the system. It is probably reasonable to stay with 32 bit front buffers with a blit from the 64 bit back buffer performing a lookup or scale and bias operation before dithering down to 32 bit. Dynamic light adaptation can also be done during this copy. Dithering can work quite well as long as you are only performing a single pass.
|
210 |
+
|
211 |
+
I used to be pitching this in an abstract "you probably should be doing this" form, but two significant things have happened that have moved this up my hit list to something that I am fairly positive about.
|
212 |
+
|
213 |
+
Mark Peercy of SGI has shown, quite surprisingly, that all Renderman surface shaders can be decomposed into multi-pass graphics operations if two extensions are provided over basic OpenGL: the existing pixel texture extension, which allows dependent texture lookups (matrox already supports a form of this, and most vendors will over the next year), and signed, floating point colors through the graphics pipeline. It also makes heavy use of the existing, but rarely optimized, copyTexSubImage2D functionality for temporaries.
|
214 |
+
|
215 |
+
This is a truly striking result. In retrospect, it seems obvious that with adds, multiplies, table lookups, and stencil tests that you can perform any computation, but most people were working under the assumption that there were fundamentally different limitations for "realtime" renderers vs offline renderers. It may take hundreds or thousands of passes, but it clearly defines an approach with no fundamental limits. This is very important. I am looking forward to his Siggraph paper this year.
|
216 |
+
|
217 |
+
Once I set down and started writing new renderers targeted at GeForce level performance, the precision issue has started to bite me personally. There are quite a few times where I have gotten visible banding after a set of passes, or have had to worry about ordering operations to avoid clamping. There is nothing like actually dealing with problems that were mostly theoretical before..
|
218 |
+
|
219 |
+
64 bit pixels. It is The Right Thing to do. Hardware vendors: don't you be the company that is the last to make the transition.
|
220 |
+
|
221 |
+
|

-----------------------------------------
John Carmack's .plan for May 08, 2000
-----------------------------------------

The .qc files for quake1/quakeworld are now available under the GPL in source/qw-qc.tar.gz on our ftp site. This was an oversight on my part in the original release.

Thanks to the QuakeForge team for doing the grunt work of the preparation.

-----------------------------------------
John Carmack's .plan for May 09, 2000
-----------------------------------------

And the Q1 utilities are now also available under the GPL in source/q1tools_gpl.tgz.

-----------------------------------------
John Carmack's .plan for May 14, 2000
-----------------------------------------

I stayed a couple days after E3 to attend the SORAC amateur rocket launch. I have provided some sponsorship to two of the teams competing for the CATS (Cheap Access to Space) rocketry prize, and it was a nice opportunity to get out and meet some of the people.

It is interesting how similar the activity is around an experimental rocket launch, a trip to a race track with an experimental car, and the release of a beta version of new software. Lots of "twenty more minutes!", and lots of well-wishers waiting around while the people on the critical path sweat over what they are doing.

Mere minutes before we absolutely, positively needed to leave to catch our plane flight, they started the countdown. The rocket launched impressively, but broke apart at a relatively low altitude. Ouch. It was a hybrid, so there wasn't really an explosion, but watching the debris rain down wasn't very heartening. Times like that, I definitely appreciate working in software. "Run it again, with a breakpoint!"

Note to self: pasty-skinned programmers ought not stand out in the Mojave desert for multiple hours.

http://www.space-frontier.org/Events/CATSPRIZE_1/
http://www.energyrs.com/sorac/sorac.htm
http://www.jpaerospace.com/

-----------------------------------------
John Carmack's .plan for May 17, 2000
-----------------------------------------

I have gotten a lot of requests for comments on the latest crop of video cards, so here is my initial technical evaluation. We have played with some early versions, but this is a paper evaluation. I am not in a position to judge 2D GDI issues or TV/DVD issues, so this is just 3D commentary.

Nvidia Marketing silliness: saying "seven operations on a pixel" for a dual texture chip. Yes, I like NV_register_combiners a lot, but come on..

The DDR GeForce is the reigning champ of 3D cards. Of the shipping boards, it is basically better than everyone at every aspect of 3D graphics, and pioneered some features that are going to be very important: signed pixel math, dot product blending, and cubic environment maps.

The GeForce2 is just a speed bumped GeForce with a few tweaks, but that's not a bad thing. Nvidia will have far and away the tightest drivers for quite some time, and that often means more than a lot of new features in the real world.

The nvidia register combiners are highly programmable, and can often save a rendering pass or allow a somewhat higher quality calculation, but on the whole, I would take ATI's third texture for flexibility.

Nvidia will probably continue to hit the best framerates in benchmarks at low resolution, because they have flexible hardware with geometry acceleration and well-tuned drivers.

GeForce is my baseline for current rendering work, so I can wholeheartedly recommend it.

ATI Marketing silliness: "charisma engine" and "pixel tapestry" are silly names for vertex and pixel processing that are straightforward improvements over existing methods. Sony is probably to blame for starting that.

The Radeon has the best feature set available, with several advantages over GeForce:

A third texture unit per pixel
Three dimensional textures
Dependent texture reads (bump env map)
Greater internal color precision
User clip planes orthogonal to all rasterization modes
More powerful vertex blending operations

The shadow id map support may be useful, but my work with shadow buffers has shown them to have significant limitations for global use in a game.

On paper, it is better than GeForce in almost every way except that it is limited to a maximum of two pixels per clock while GeForce can do four. This comes into play when the pixels don't do as much memory access, for example when just drawing shadow planes to the depth/stencil buffer, or when drawing in roughly front to back order and many of the later pixels depth fail, avoiding the color buffer writes.

Depending on the application and algorithm, this can be anywhere from basically no benefit when doing 32 bit blended multi-pass, dual texture rendering to nearly double the performance for 16 bit rendering with compressed textures. In any case, a similarly clocked GeForce(2) should somewhat outperform a Radeon on today's games when fill rate limited. Future games that do a significant number of rendering passes on the entire world may go back in ATI's favor if they can use the third texture unit, but I doubt it will be all that common.

The real issue is how quickly ATI can deliver fully clocked production boards, bring up stable drivers, and wring all the performance out of the hardware. This is a very different beast than the Rage128. I would definitely recommend waiting on some consumer reviews to check for teething problems before upgrading to a Radeon, but if things go well, ATI may give nvidia a serious run for their money this year.

3DFX Marketing silliness: Implying that a voodoo 5 is of a different class than a voodoo 4 isn't right. Voodoo 4 max / ultra / SLI / dual / quad or something would have been more forthright.

Rasterization feature wise, voodoo4 is just catching up to the original TNT. We finally have 32 bit color and stencil. Yeah.

There aren't any geometry features.

The T buffer is really nothing more than an accumulation buffer that is averaged together during video scanout. This same combining of separate buffers can be done by any modern graphics card if they are set up for it (although they will lose two bits of color precision in the process). At around 60 fps there is a slight performance win by doing it at video scanout time, but at 30 fps it is actually less memory traffic to do it explicitly. Video scan tricks also usually don't work in windowed modes.
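
As a sketch of that explicit combining on any card with a standard OpenGL accumulation buffer (the jitter table and scene callback are placeholders, and the two-bit precision loss mentioned above applies):

[code]
/* Average four jittered renderings with the core OpenGL accumulation
   buffer, which is the same result a T buffer produces at scanout time. */
#include <GL/gl.h>

extern void DrawSceneWithJitter(float jx, float jy);   /* placeholder */

static const float jitter[4][2] = {
    { 0.125f, 0.375f }, { 0.375f, 0.875f },
    { 0.875f, 0.625f }, { 0.625f, 0.125f },
};

void DrawAntiAliasedFrame(void)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < 4; i++) {
        DrawSceneWithJitter(jitter[i][0], jitter[i][1]);
        glAccum(GL_ACCUM, 0.25f);      /* add this pass at 1/4 weight */
    }
    glAccum(GL_RETURN, 1.0f);          /* write the average back out  */
}
[/code]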

The real unique feature of the voodoo5 is subpixel jittering during rasterization, which can't reasonably be emulated by other hardware. This does indeed improve the quality of anti-aliasing, although I think 3dfx might be pushing it a bit by saying their 4 sample jittering is as good as 16 sample unjittered.

The saving grace of the voodoo5 is the scalability. Because it only uses SDR ram, a dual chip Voodoo5 isn't all that much faster than some other single chip cards, but the quad chip card has over twice the pixel fill rate of the nearest competitor. That is a huge increment. Voodoo5 6000 should win every benchmark that becomes fill rate limited.

I haven't been able to honestly recommend a voodoo3 to people for a long time, unless they had a favorite glide game or wanted early Linux XFree86 4.0 3D support. Now (well, soon), a Voodoo5 6000 should make all of today's games look better than any other card. You can get over twice as many pixel samples, and have them jittered and blended together for anti-aliasing.

It won't be able to hit Q3 frame rates as high as GeForce, but if you have a high end processor there really may not be all that much difference for you between 100fps and 80fps unless you are playing hardcore competitive and can't stand the occasional drop below 60fps.

There are two drawbacks: it's expensive, and it won't take advantage of the new rasterization features coming in future games. It probably wouldn't be wise to buy a voodoo5 if you plan on keeping it for two years.

-----------------------------------------
John Carmack's .plan for Jun 01, 2000
-----------------------------------------

Well, this is going to be an interesting .plan update.

Most of this is not really public business, but if some things aren't stated explicitly, it will reflect unfairly on someone.

As many people have heard discussed, there was quite a desire to remake DOOM as our next project after Q3. Discussing it brought an almost palpable thrill to most of the employees, but Adrian had a strong enough dislike for the idea that it was shot down over and over again.

Design work on an alternate game has been going on in parallel with the mission pack development and my research work.

Several factors, including a general lack of enthusiasm for the proposed plan, the warmth that Wolfenstein was met with at E3, and excitement about what we can do with the latest rendering technology, were making it seem more and more like we weren't going down the right path.

I discussed it with some of the other guys, and we decided that it was important enough to drag the company through an unpleasant fight over it.

An ultimatum was issued to Kevin and Adrian (who control >50% of the company): We are working on DOOM for the next project unless you fire us.

Obviously no fun for anyone involved, but the project direction was changed, new hires have been expedited, and the design work has begun.

It wasn't planned to announce this soon, but here it is: We are working on a new DOOM game, focusing on the single player game experience, and using brand new technology in almost every aspect of it. That is all we are prepared to say about the game for quite some time, so don't push for interviews. We will talk about it when things are actually built, to avoid giving misleading comments.

It went smoother than expected, but the other shoe dropped yesterday.

Kevin and Adrian fired Paul Steed in retaliation, over my opposition.

Paul has certainly done things in the past that could be grounds for dismissal, but this was retaliatory for him being among the "conspirators".

I happen to think Paul was damn good at his job, and that he was going to be one of the most valuable contributors to DOOM.

We need to hire two new modeler/animator/cinematic director types. If you have a significant commercial track record in all three areas, and consider yourself at the top of your field, send your resume to Kevin Cloud.

johnc_plan_2001.txt

-----------------------------------------
John Carmack's .plan for Feb 22, 2001
-----------------------------------------

GeForce 3 Overview

I just got back from Tokyo, where I demonstrated our new engine running under MacOS-X with a GeForce 3 card. We had quite a bit of discussion about whether we should be showing anything at all, considering how far away we are from having a title on the shelves, so we probably aren't going to be showing it anywhere else for quite a while.

We do run a bit better on a high end wintel system, but the Apple performance is still quite good, especially considering the short amount of time that the drivers had before the event.

It is still our intention to have a simultaneous release of the next product on Windows, MacOS-X, and Linux.

Here is a dump on the GeForce 3 that I have been seriously working with for a few weeks now:

The short answer is that the GeForce 3 is fantastic. I haven't had such an impression of raising the performance bar since the Voodoo 2 came out, and there are a ton of new features for programmers to play with.

Graphics programmers should run out and get one at the earliest possible time. For consumers, it will be a tougher call. There aren't any applications out right now that take proper advantage of it, but it should still be quite a bit faster at everything than a GF2, especially with anti-aliasing. Balance that against whatever the price turns out to be.

While the Radeon is a good effort in many ways, it has enough shortfalls that I still generally call the GeForce 2 Ultra the best card you can buy right now, so Nvidia is basically dethroning their own product.

It is somewhat unfortunate that it is labeled GeForce 3, because GeForce 2 was just a speed bump of GeForce, while GF3 is a major architectural change. I wish they had called the GF2 something else.

The things that are good about it:

Lots of values have additional internal precision, like texture coordinates and rasterization coordinates. There are only a few places where this matters, but it is nice to be cleaning up. Rasterization precision is about the last thing that the multi-thousand dollar workstation boards still do any better than the consumer cards.

Adding more texture units and more register combiners is an obvious evolutionary step.

An interesting technical aside: when I first changed something I was doing with five single or dual texture passes on a GF to something that only took two quad texture passes on a GF3, I got a surprisingly modest speedup. It turned out that the texture filtering and bandwidth was the dominant factor, not the frame buffer traffic that was saved with more texture units. When I turned off anisotropic filtering and used compressed textures, the GF3 version became twice as fast.

The 8x anisotropic filtering looks really nice, but it has a 30%+ speed cost. For existing games where you have speed to burn, it is probably a nice thing to force on, but it is a bit much for me to enable on the current project. Radeon supports 16x aniso at a smaller speed cost, but not in conjunction with trilinear, and something is broken in the chip that makes the filtering jump around with triangular rasterization dependencies.

The depth buffer optimizations are similar to what the Radeon provides, giving almost everything some measure of speedup, and larger ones available in some cases with some redesign.

3D textures are implemented with the full, complete generality. Radeon offers 3D textures, but without mip mapping and in a non-orthogonal manner (taking up two texture units).

Vertex programs are probably the most radical new feature, and, unlike most "radical new features", actually turn out to be pretty damn good. The instruction language is clear and obvious, with wonderful features like free arbitrary swizzle and negate on each operand, and the obvious things you want for graphics like dot product instructions.

The vertex program instructions are what SSE should have been.
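
To give a flavor of the instruction set, here is the canonical transform-and-pass-through program from the NV_vertex_program extension, loaded through its C entry points (a sketch: setting the c[0]..c[3] matrix constants and the extension function pointer setup are left out):

[code]
/* Transform the vertex by a matrix kept in constant registers c[0]..c[3]
   and pass the vertex color through. */
#include <GL/gl.h>
#include <GL/glext.h>

static const GLubyte program[] =
    "!!VP1.0\n"
    "DP4 o[HPOS].x, c[0], v[OPOS];\n"   /* clip space position = M * vertex */
    "DP4 o[HPOS].y, c[1], v[OPOS];\n"
    "DP4 o[HPOS].z, c[2], v[OPOS];\n"
    "DP4 o[HPOS].w, c[3], v[OPOS];\n"
    "MOV o[COL0], v[COL0];\n"           /* free swizzle/negate could go here */
    "END\n";

void LoadExampleVertexProgram(GLuint id)
{
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id, sizeof(program) - 1, program);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
    glEnable(GL_VERTEX_PROGRAM_NV);
}
[/code]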

A complex setup for a four-texture rendering pass is way easier to understand with a vertex program than with a ton of texgen/texture matrix calls, and it lets you do things that you just couldn't do hardware accelerated at all before. Changing the model from fixed function data like normals, colors, and texcoords to generalized attributes is very important for future progress.

Here, I think Microsoft and DX8 are providing a very good benefit by forcing a single vertex program interface down all the hardware vendors' throats.

This one is truly stunning: the drivers just worked for all the new features that I tried. I have tested a lot of pre-production 3D cards, and it has never been this smooth.

The things that are indifferent:

I'm still not a big believer in hardware accelerated curve tessellation. I'm not going to go over all the reasons again, but I would have rather seen the features left off and ended up with a cheaper part.

The shadow map support is good to get in, but I am still unconvinced that a fully general engine can be produced with acceptable quality using shadow maps for point lights. I spent a while working with shadow buffers last year, and I couldn't get satisfactory results. I will revisit that work now that I have GeForce 3 cards, and directly compare it with my current approach.

At high triangle rates, the index bandwidth can get to be a significant thing. Other cards that allow static index buffers as well as static vertex buffers will have situations where they provide higher application speed. Still, we do get great throughput on the GF3 using vertex array range and glDrawElements.

The things that are bad about it:

Vertex programs aren't invariant with the fixed function geometry paths. That means that you can't mix vertex program passes with normal passes in a multipass algorithm. This is annoying, and shouldn't have happened.

Now we come to the pixel shaders, where I have the most serious issues. I can just ignore this most of the time, but the way the pixel shader functionality turned out is painfully limited, and not what it should have been.

DX8 tries to pretend that pixel shaders live on hardware that is a lot more general than the reality.

Nvidia's OpenGL extensions expose things much more the way they actually are: the existing register combiners functionality extended to eight stages with a couple tweaks, and the texture lookup engine is configurable to interact between textures in a list of specific ways.

I'm sure it started out as a better design, but it apparently got cut and cut until it really looks like the old BumpEnvMap feature writ large: it does a few specific special effects that were deemed important, at the expense of a properly general solution.

Yes, it does full bumpy cubic environment mapping, but you still can't just do some math ops and look the result up in a texture. I was disappointed on this count with the Radeon as well, which was just slightly too hardwired to the DX BumpEnvMap capabilities to allow more general dependent texture use.

Enshrining the capabilities of this mess in DX8 sucks. Other companies had potentially better approaches, but they are now forced to dumb them down to the level of the GF3 for the sake of compatibility. Hopefully we can still see some of the extra flexibility in OpenGL extensions.

The future:

I think things are going to really clean up in the next couple years. All of my advocacy is focused on making sure that there will be a completely clean and flexible interface for me to target in the engine after DOOM, and I think it is going to happen.

The market may have shrunk to just ATI and Nvidia as significant players. Matrox, 3Dlabs, or one of the dormant companies may surprise us all, but the pace is pretty frantic.

I think I would be a little more comfortable if there was a third major player competing, but I can't fault Nvidia's path to success.

-----------------------------------------
John Carmack's .plan for Nov 16, 2001
-----------------------------------------

Driver optimizations have been discussed a lot lately because of the quake3 name checking in ATI's recent drivers, so I am going to lay out my position on the subject.

There are many driver optimizations that are pure improvements in all cases, with no negative effects. The difficult decisions come up when it comes to "trades" of various kinds, where a change will give an increase in performance, but at a cost.

Relative performance trades. Part of being a driver writer is being able to say "I don't care if stippled, anti-aliased points with texturing go slow", and optimizing accordingly. Some hardware features, like caches and hierarchical buffers, may be advantages on some apps, and disadvantages on others. Command buffer sizes often tune differently for different applications.

Quality trades. There is a small amount of wiggle room in the specs for pixel level variability, and some performance gains can be had by leaning towards the minimums. Most quality trades would actually be conformance trades, because the results are not exactly conformant, but they still do "roughly" the right thing from a visual standpoint. Compressing textures automatically, avoiding blending of very faint transparent pixels, using a 16 bit depth buffer, etc. A good application will allow the user to make most of these choices directly, but there is good call for having driver preference panels to enable these types of changes on naive applications. Many drivers now allow you to quality trade in an opposite manner - slowing application performance by turning on anti-aliasing or anisotropic texture filtering.

Conformance trades. Most conformance trades that happen with drivers are unintentional, where the slower, more general fallback case just didn't get called when it was supposed to, because the driver didn't check for a certain combination to exit some specially optimized path. However, there are optimizations that can give performance improvements in ways that make it impossible to remain conformant. For example, a driver could choose to skip storing of a color value before it is passed on to the hardware, which would save a few cycles, but make it impossible to correctly answer glGetFloatv( GL_CURRENT_COLOR, buffer ).
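
Concretely, that is the kind of conformant query such a shortcut would break (core OpenGL calls; the constants here are exactly representable, so a conformant driver returns them exactly):

[code]
#include <assert.h>
#include <GL/gl.h>

void CheckCurrentColorConformance(void)
{
    GLfloat current[4];

    glColor4f(0.25f, 0.5f, 0.75f, 1.0f);
    glGetFloatv(GL_CURRENT_COLOR, current);  /* a driver that skipped storing
                                                the color can't answer this */
    assert(current[0] == 0.25f && current[1] == 0.5f &&
           current[2] == 0.75f && current[3] == 1.0f);
}
[/code]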

Normally, driver writers will just pick their priorities and make the trades, but sometimes there will be a desire to make different trades in different circumstances, so as to get the best of both worlds.

Explicit application hints are a nice way to offer different performance characteristics, but that requires cooperation from the application, so it doesn't help in an ongoing benchmark battle. OpenGL's glHint() call is the right thought, but not really set up as flexibly as you would like. Explicit extensions are probably the right way to expose performance trades, but it isn't clear to me that any conformant trade will be a big enough difference to add code for.
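
For reference, the existing hint mechanism (core OpenGL; the targets are coarse, which is the "not as flexible as you would like" part):

[code]
#include <GL/gl.h>

void PreferSpeedOverQuality(void)
{
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    glHint(GL_FOG_HINT, GL_FASTEST);
}
[/code]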

End-user selectable optimizations. Put a selection option in the driver properties window to allow the user to choose which application class they would like to be favored in some way. This has been done many times, and is a reasonable way to do things. Most users would never touch the setting, so some applications may be slightly faster or slower than in their "optimal benchmark mode".

Attempt to guess the application from app names, window strings, etc. Drivers are sometimes forced to do this to work around bugs in established software, and occasionally they will try to use this as a cue for certain optimizations.

My positions:

Making any automatic optimization based on a benchmark name is wrong. It subverts the purpose of benchmarking, which is to gauge how a similar class of applications will perform on a tested configuration, not just how the single application chosen as representative performs.

It is never acceptable to have the driver automatically make a conformance tradeoff, even if they are positive that it won't make any difference. The reason is that applications evolve, and there is no guarantee that a future release won't have different assumptions, causing the upgrade to misbehave. We have seen this in practice with Quake3 and derivatives, where vendors assumed something about what may or may not be enabled during a compiled vertex array call. Most of these are just mistakes, or, occasionally, laziness.

Allowing a driver to present a non-conformant option for the user to select is an interesting question. I know that as a developer, I would get hate mail from users when a point release breaks on their whiz-bang optimized driver, just like I do with overclocked CPUs, and I would get the same "but it works with everything else!" response when I tell them to put it back to normal. On the other hand, being able to tweak around with that sort of thing is fun for technically inclined users. I lean towards frowning on it, because it is a slippery slope from there down into "cheating drivers" of the see-through-walls variety.

Quality trades are here to stay, with anti-aliasing, anisotropic texture filtering, and other options being positive trades that a user can make, and allowing various texture memory optimizations can be a very nice thing for a user trying to get some games to work well. However, it is still important that it start from a completely conformant state by default. This is one area where application naming can be used reasonably by the driver, to maintain user selected per-application modifiers.

I'm not fanatical on any of this, because the overriding purpose of software is to be useful, rather than correct, but the days of game-specific mini-drivers that can just barely cut it are past, and we should demand more from the remaining vendors.

Also, excessive optimization is the cause of quite a bit of ill user experience with computers. Byzantine code paths extract costs as long as they exist, not just as they are written.

-----------------------------------------
John Carmack's .plan for Dec 21, 2001
-----------------------------------------

The Quake 2 source code is now available for download, licensed under the GPL.

ftp://ftp.idsoftware.com/idstuff/source/quake2.zip

As with previous source code releases, the game data remains under the original copyright and license, and cannot be freely distributed. If you create a true total conversion, you can give (or sell) a complete package away, as long as you abide by the GPL source code license. If your projects use the original Quake 2 media, the media must come from a normal, purchased copy of the game.

I'm sure I will catch some flack about increased cheating after the source release, but there are plenty of Q2 cheats already out there, so you are already in the position of having to trust the other players to a degree. The problem is really only solvable by relying on the community to police itself, because it is a fundamentally unwinnable technical battle to make a completely cheat proof game of this type. Play with your friends.

johnc_plan_2002.txt
1 |
+
-----------------------------------------
|
2 |
+
John Carmack's .plan for Feb 11, 2002
|
3 |
+
-----------------------------------------
|
4 |
+
|
5 |
+
Last month I wrote the Radeon 8500 support for Doom.
|
6 |
+
|
7 |
+
The bottom line is that it will be a fine card for the game, but the details are sort of interesting.
|
8 |
+
|
9 |
+
I had a pre-production board before Siggraph last year, and we were discussing the possibility of letting ATI show a Doom demo behind closed doors on it. We were all very busy at the time, but I took a shot at bringing up support over a weekend. I hadn't coded any of the support for the custom ATI extensions yet, but I ran the game using only standard OpenGL calls (this is not a supported path, because without bump mapping everything looks horrible) to see how it would do. It didn't even draw the console correctly, because they had driver bugs with texGen. I thought the odds were very long against having all the new, untested extensions working properly, so I pushed off working on it until they had revved the drivers a few more times.
|
10 |
+
|
11 |
+
My judgment was colored by the experience of bringing up Doom on the original Radeon card a year earlier, which involved chasing a lot of driver bugs. Note that ATI was very responsive, working closely with me on it, and we were able to get everything resolved, but I still had no expectation that things would work correctly the first time.
|
12 |
+
|
13 |
+
Nvidia's OpenGL drivers are my "gold standard", and it has been quite a while since I have had to report a problem to them, and even their brand new extensions work as documented the first time I try them. When I have a problem on an Nvidia, I assume that it is my fault. With anyone else's drivers, I assume it is their fault. This has turned out correct almost all the time. I have heard more anecdotal reports of instability on some systems with Nivida drivers recently, but I track stability separately from correctness, because it can be influenced by so many outside factors.

ATI had been patiently pestering me about support for a few months, so last month I finally took another stab at it. The standard OpenGL path worked flawlessly, so I set about taking advantage of all the 8500 specific features. As expected, I did run into more driver bugs, but ATI got me fixes rapidly, and we soon had everything working properly. It is interesting to contrast the Nvidia and ATI functionality:

The vertex program extensions provide almost the same functionality. The ATI hardware is a little bit more capable, but not in any way that I care about. The ATI extension interface is massively more painful to use than the text parsing interface from Nvidia. On the plus side, the ATI vertex programs are invariant with the normal OpenGL vertex processing, which allowed me to reuse a bunch of code. The Nvidia vertex programs can't be used in multipass algorithms with standard OpenGL passes, because they generate tiny differences in depth values, forcing you to implement EVERYTHING with vertex programs. Nvidia is planning on making this optional in the future, at a slight speed cost.
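
To make the interface difference concrete: with NV_vertex_program you hand the driver a program as text (a minimal sketch below, assuming the extension entry points are already resolved), while the ATI interface builds the equivalent program one function call per instruction, with every operand bound through separate calls.

    /* Sketch: a bare position transform through the NV text interface. */
    static const char *vp =
        "!!VP1.0\n"
        "DP4 o[HPOS].x, c[0], v[OPOS];\n"
        "DP4 o[HPOS].y, c[1], v[OPOS];\n"
        "DP4 o[HPOS].z, c[2], v[OPOS];\n"
        "DP4 o[HPOS].w, c[3], v[OPOS];\n"
        "END\n";

    void loadTestVertexProgram( GLuint prog ) {
        glBindProgramNV( GL_VERTEX_PROGRAM_NV, prog );
        glLoadProgramNV( GL_VERTEX_PROGRAM_NV, prog,
                         (GLsizei)strlen( vp ), (const GLubyte *)vp );
    }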

I have mixed feelings about the vertex object / vertex array range extensions. ATI's extension seems more "right" in that it automatically handles synchronization by default, and could be implemented as a wire protocol, but there are advantages to the VAR extension being simply a hint. It is easy to have a VAR program just fall back to normal virtual memory by not setting the hint and using malloc, but ATI's extension requires different function calls for using vertex objects and normal vertex arrays.
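
A sketch of the two allocation models side by side (extension entry points assumed resolved; the sizes and data are placeholders):

    void setupVertexMemory( GLsizei size, const void *data ) {
        /* NV_vertex_array_range: the fast memory is just a hint, so
           plain malloc is a clean fallback on the same draw path */
        void *mem = wglAllocateMemoryNV( size, 0.0f, 0.0f, 0.5f );
        if ( !mem ) {
            mem = malloc( size );
        }
        memcpy( mem, data, size );
        glVertexArrayRangeNV( size, mem );
        glEnableClientState( GL_VERTEX_ARRAY_RANGE_NV );
        glVertexPointer( 3, GL_FLOAT, 0, mem );

        /* ATI_vertex_array_object: an opaque buffer with its own calls */
        GLuint buf = glNewObjectBufferATI( size, data, GL_STATIC_ATI );
        glArrayObjectATI( GL_VERTEX_ARRAY, 3, GL_FLOAT, 0, buf, 0 );
    }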

The fragment level processing is clearly way better on the 8500 than on the Nvidia products, including the latest GF4. You have six individual textures, but you can access the textures twice, giving up to eleven possible texture accesses in a single pass, and the dependent texture operation is much more sensible. This wound up being a perfect fit for Doom, because the standard path could be implemented with six unique textures, but required one texture (a normalization cube map) to be accessed twice. The vast majority of Doom light / surface interaction rendering will be a single pass on the 8500, in contrast to two or three passes, depending on the number of color components in a light, for GF3/GF4 (*note GF4 bitching later on).
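
The shape of the 8500 path, sketched as an ATI_fragment_shader call stream (register and unit choices here are illustrative, not the actual Doom interaction shader):

    void buildTwoPhaseShader( GLuint shaderId ) {
        glBindFragmentShaderATI( shaderId );
        glBeginFragmentShaderATI();

        /* first phase: an ordinary sample, e.g. the normalization cube map */
        glSampleMapATI( GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI );

        /* second phase: sampling again, here as a dependent read routed
           through the first result, then an arithmetic combine */
        glSampleMapATI( GL_REG_1_ATI, GL_REG_0_ATI, GL_SWIZZLE_STR_ATI );
        glColorFragmentOp2ATI( GL_DOT3_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
                               GL_REG_0_ATI, GL_NONE, GL_NONE,
                               GL_REG_1_ATI, GL_NONE, GL_NONE );

        glEndFragmentShaderATI();
    }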

Initial performance testing was interesting. I set up three extreme cases to exercise different characteristics:

A test of the non-textured stencil shadow speed showed a GF3 about 20% faster than the 8500. I believe that Nvidia has a slightly higher performance memory architecture.

A test of light interaction speed initially had the 8500 significantly slower than the GF3, which was shocking due to the difference in pass count. ATI identified some driver issues, and the speed came around so that the 8500 was faster in all combinations of texture attributes, in some cases by 30+%. This was about what I expected, given the large savings in memory traffic by doing everything in a single pass.

A high polygon count scene that was more representative of real game graphics under heavy load gave a surprising result. I was expecting ATI to clobber Nvidia here due to the much lower triangle count and MUCH lower state change functional overhead from the single pass interaction rendering, but they came out slower. ATI has identified an issue that is likely causing the unexpected performance, but it may not be something that can be worked around on current hardware.

I can set up scenes and parameters where either card can win, but I think that current Nvidia cards are still a somewhat safer bet for consistent performance and quality.

On the topic of current Nvidia cards:

Do not buy a GeForce4-MX for Doom.

Nvidia has really made a mess of the naming conventions here. I always thought it was bad enough that the GF2 was just a speed-bumped GF1, while the GF3 had significant architectural improvements over the GF2. I expected the GF4 to be the speed-bumped GF3, but calling the NV17 the GF4-MX really sucks.

The GF4-MX will still run Doom properly, but it will be using the NV10 codepath with only two texture units and no vertex shaders. A GF3 or 8500 will be a much better performer. The GF4-MX may still be the card of choice for many people depending on pricing, especially considering that many games won't use four textures and vertex programs, but damn, I wish they had named it something else.

As usual, there will be better cards available from both Nvidia and ATI by the time we ship the game.

8:50 pm addendum: Mark Kilgard at Nvidia said that the current drivers already support the vertex program option to be invariant with the fixed function path, and that it turned out to be one instruction FASTER, not slower.

-----------------------------------------
John Carmack's .plan for Mar 15, 2002
-----------------------------------------

Mark Kilgard and Cass Everitt at Nvidia have released a paper on shadow volume rendering with several interesting bits in it. They also include a small document that I wrote a couple years ago about my discovery process during the development of some of the early Doom technology.
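
The depth-fail variant the paper describes comes down to a few lines of standard OpenGL (a sketch -- DrawShadowVolume() is a stand-in for rendering the actual volume geometry):

    extern void DrawShadowVolume( void );   /* assumed helper */

    void stencilShadowPass( void ) {
        glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
        glDepthMask( GL_FALSE );
        glEnable( GL_CULL_FACE );
        glEnable( GL_STENCIL_TEST );
        glStencilFunc( GL_ALWAYS, 0, ~0u );

        glCullFace( GL_FRONT );                    /* rasterize back faces */
        glStencilOp( GL_KEEP, GL_INCR, GL_KEEP );  /* increment on depth fail */
        DrawShadowVolume();

        glCullFace( GL_BACK );                     /* rasterize front faces */
        glStencilOp( GL_KEEP, GL_DECR, GL_KEEP );  /* decrement on depth fail */
        DrawShadowVolume();
    }

Surfaces are then lit only where the stencil count has returned to zero.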

http://developer.nvidia.com/view.asp?IO=robust_shadow_volumes

-----------------------------------------
John Carmack's .plan for Jun 25, 2002
-----------------------------------------

The Matrox Parhelia Report:

The executive summary is that the Parhelia will run Doom, but it is not performance competitive with Nvidia or ATI.

Driver issues remain, so it is not perfect yet, but I am confident that Matrox will resolve them.

The performance was really disappointing for the first 256 bit DDR card. I tried to set up a "poster child" case that would stress the memory subsystem above and beyond any driver or triangle level inefficiencies, but I was unable to get it to ever approach the performance of a GF4.

The basic hardware support is good, with fragment flexibility better than the GF4 (but not as good as the ATI 8500), but it just doesn't keep up in raw performance. With a die shrink, this chip could probably be a contender, but there are probably going to be other chips out by then that will completely eclipse this generation of products.

None of the special features will be really useful for Doom:

The 10 bit color framebuffer is nice, but Doom needs more than 2 bits of destination alpha when a card only has four texture units, so we can't use it.

Anti-aliasing features are nice, but the card isn't all that fast in minimum feature mode, so nobody is going to be turning on AA. The same goes for "surround gaming". While the framerate wouldn't be 1/3 the base, it would still probably be cut in half.

Displacement mapping. Sigh. I am disappointed that the industry is still pursuing any quad based approaches. Haven't we learned from the stellar success of 3DO, Saturn, and NV1 that quads really suck? In any case, we can't use any geometry amplification scheme (including ATI's truform) in conjunction with stencil shadow volumes.

-----------------------------------------
John Carmack's .plan for Jun 27, 2002
-----------------------------------------

More graphics card notes:

I need to apologize to Matrox - their implementation of hardware displacement mapping is NOT quad based. I was thinking about a certain other company's proposed approach. Matrox's implementation actually looks quite good, so even if we don't use it because of the geometry amplification issues, I think it will serve the noble purpose of killing dead any proposal to implement a quad based solution.

I got a 3Dlabs P10 card in last week, and yesterday I put it through its paces. Because my time is fairly overcommitted, first impressions often determine how much work I devote to a given card. I didn't speak to ATI for months after they gave me a beta 8500 board last year with drivers that rendered the console incorrectly. :)

I was duly impressed when the P10 just popped right up with full functional support for both the fallback ARB_ extension path (without specular highlights), and the NV10 Nvidia register combiners path. I only saw two issues that were at all incorrect in any of our data, and one of them is debatable. They don't support NV_vertex_program_1_1, which I use for the NV20 path, and when I hacked my programs back to 1.0 support for testing, an issue did show up, but still, this is the best showing from a new board from any company other than Nvidia.

It is too early to tell what the performance is going to be like, because they don't yet support a vertex object extension, so the CPU is hand feeding all the vertex data to the card at the moment. It was faster than I expected for those circumstances.

Given the good first impression, I was willing to go ahead and write a new back end that would let the card do the entire Doom interaction rendering in a single pass. The most expedient sounding option was to just use the Nvidia extensions that they implement, NV_vertex_program and NV_register_combiners, with seven texture units instead of the four available on GF3/GF4. Instead, I decided to try using the prototype OpenGL 2.0 extensions they provide.

The implementation went very smoothly, but I did run into the limits of their current prototype compiler before the full feature set could be implemented. I like it a lot. I am really looking forward to doing research work with this programming model after the compiler matures a bit. While the shading languages are the most critical aspects, and can be broken out as extensions to current OpenGL, there are a lot of other subtle-but-important things that are addressed in the full OpenGL 2.0 proposal.

I am now committed to supporting an OpenGL 2.0 renderer for Doom through all the spec evolutions. If anything, I have been somewhat remiss in not pushing the issues as hard as I could with all the vendors. Now really is the critical time to start nailing things down, and the decisions may stay with us for ten years.

A GL2 driver won't give any theoretical advantage over the current back ends optimized for cards with 7+ texture capability, but future research work will almost certainly be moving away from the lower level coding practices, and if some new vendor pops up (say, Rendition back from the dead) with a next-gen card, I would strongly urge them to implement GL2 instead of proprietary extensions.

I have not done a detailed comparison with Cg. There are a half dozen C-like graphics languages floating around, and honestly, I don't think there is a hell of a lot of usability difference between them at the syntax level. They are all a whole lot better than the current interfaces we are using, so I hope syntax quibbles don't get too religious. It won't be too long before all real work is done in one of these, and developers that stick with the lower level interfaces will be regarded like people that write all-assembly PC applications today. (I get some amusement from the all-assembly crowd, and it can be impressive, but it is certainly not effective)

I do need to get up on a soapbox for a long discourse about why the upcoming high level languages MUST NOT have fixed, queried resource limits if they are going to reach their full potential. I will go into a lot of detail when I get a chance, but drivers must have the right and responsibility to multipass arbitrarily complex inputs to hardware with smaller limits. Get over it.

johnc_plan_2003.txt

-----------------------------------------
John Carmack's .plan for Jan 29, 2003
-----------------------------------------

NV30 vs R300, current developments, etc

At the moment, the NV30 is slightly faster on most scenes in Doom than the R300, but I can still find some scenes where the R300 pulls a little bit ahead. The issue is complicated because of the different ways the cards can choose to run the game.

The R300 can run Doom in three different modes: ARB (minimum extensions, no specular highlights, no vertex programs), R200 (full featured, almost always single pass interaction rendering), and ARB2 (floating point fragment shaders, minor quality improvements, always single pass).

The NV30 can run Doom in five different modes: ARB, NV10 (full featured, five rendering passes, no vertex programs), NV20 (full featured, two or three rendering passes), NV30 (full featured, single pass), and ARB2.

The R200 path has a slight speed advantage over the ARB2 path on the R300, but only by a small margin, so it defaults to using the ARB2 path for the quality improvements. The NV30 runs the ARB2 path MUCH slower than the NV30 path. Half the speed at the moment. This is unfortunate, because when you do an exact, apples-to-apples comparison using exactly the same API, the R300 looks twice as fast, but when you use the vendor-specific paths, the NV30 wins.

The reason for this is that ATI does everything at high precision all the time, while Nvidia internally supports three different precisions with different performances. To make it even more complicated, the exact precision that ATI uses is in between the floating point precisions offered by Nvidia, so when Nvidia runs fragment programs, they are at a higher precision than ATI's, which is some justification for the slower speed. Nvidia assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.
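
The difference shows up right in the program text: NV_fragment_program selects precision per instruction with R/H/X suffixes, while ARB_fragment_program has no precision control at all, leaving the choice to the driver. Illustrative fragments, not Doom's shaders:

    /* NV30 path: an explicitly half precision multiply */
    static const char *nvFrag =
        "!!FP1.0\n"
        "MULH o[COLH], f[TEX0], f[COL0];\n"
        "END\n";

    /* ARB2 path: one precision for everything, chosen by the driver */
    static const char *arbFrag =
        "!!ARBfp1.0\n"
        "MUL result.color, fragment.texcoord[0], fragment.color;\n"
        "END\n";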

The current NV30 cards do have some other disadvantages: They take up two slots, and when the cooling fan fires up they are VERY LOUD. I'm not usually one to care about fan noise, but the NV30 does annoy me.

I am using an NV30 in my primary work system now, largely so I can test more of the rendering paths on one system, and because I feel Nvidia still has somewhat better driver quality (ATI continues to improve, though). For a typical consumer, I don't think the decision is at all clear cut at the moment.

For developers doing forward looking work, there is a different tradeoff - the NV30 runs fragment programs much slower, but it has a huge maximum instruction count. I have bumped into program limits on the R300 already.

As always, better cards are coming soon.

-----

Doom has dropped support for vendor-specific vertex programs (NV_vertex_program and EXT_vertex_shader), in favor of using ARB_vertex_program for all rendering paths. This has been a pleasant thing to do, and both ATI and Nvidia supported the move. The standardization process for ARB_vertex_program was pretty drawn out and arduous, but in the end, it is a just-plain-better API than either of the vendor specific ones that it replaced. I fretted for a while over whether I should leave in support for the older APIs for broader driver compatibility, but the final decision was that we are going to require a modern driver for the game to run in the advanced modes. Older drivers can still fall back to either the ARB or NV10 paths.
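
For reference, the entire cross-vendor load path now looks something like this (a sketch with minimal error handling):

    GLuint loadVertexProgramARB( const char *text ) {
        GLuint prog;
        GLint errPos;

        glGenProgramsARB( 1, &prog );
        glBindProgramARB( GL_VERTEX_PROGRAM_ARB, prog );
        glProgramStringARB( GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                            (GLsizei)strlen( text ), text );
        glGetIntegerv( GL_PROGRAM_ERROR_POSITION_ARB, &errPos );
        if ( errPos != -1 ) {
            /* parse failed at errPos; GL_PROGRAM_ERROR_STRING_ARB holds
               the human readable message */
            return 0;
        }
        return prog;
    }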

The newly-ratified ARB_vertex_buffer_object extension will probably let me do the same thing for NV_vertex_array_range and ATI_vertex_array_object.
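
That would collapse the two vendor paths into something like (a sketch):

    void uploadStaticVertexes( GLsizei numBytes, const void *vertexData ) {
        GLuint vbo;
        glGenBuffersARB( 1, &vbo );
        glBindBufferARB( GL_ARRAY_BUFFER_ARB, vbo );
        glBufferDataARB( GL_ARRAY_BUFFER_ARB, numBytes, vertexData,
                         GL_STATIC_DRAW_ARB );
        /* with a buffer bound, the pointer argument becomes an offset */
        glVertexPointer( 3, GL_FLOAT, 0, (const GLvoid *)0 );
    }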

Reasonable arguments can be made for and against the OpenGL or Direct-X style of API evolution. With vendor extensions, you get immediate access to new functionality, but then there is often a period of squabbling about exact feature support from different vendors before an industry standard settles down. With central planning, you can have "phasing problems" between hardware and software releases, and there is a real danger of bad decisions hampering the entire industry, but enforced commonality does make life easier for developers. Trying to keep boneheaded-ideas-that-will-haunt-us-for-years out of Direct-X is the primary reason I have been attending the Windows Graphics Summit for the past three years, even though I still code for OpenGL.

The most significant functionality in the new crop of cards is the truly flexible fragment programming, as exposed with ARB_fragment_program. Moving from the "switches and dials" style of discrete functional graphics programming to generally flexible programming with indirection and high precision is what is going to enable the next major step in graphics engines.

It is going to require fairly deep, non-backwards-compatible modifications to an engine to take real advantage of the new features, but working with ARB_fragment_program is really a lot of fun, so I have added a few little tweaks to the current codebase on the ARB2 path:

High dynamic color ranges are supported internally, rather than with post-blending. This gives a few more bits of color precision in the final image, but it isn't something that you really notice.

Per-pixel environment mapping, rather than per-vertex. This fixes a pet-peeve of mine, which is large panes of environment mapped glass that aren't tessellated enough, giving that awful warping-around-the-triangulation effect as you move past them.

Light and view vectors normalized with math, rather than a cube map. On future hardware this will likely be a performance improvement due to the decrease in bandwidth, but current hardware has the computation and bandwidth balanced such that it is pretty much a wash. What it does (in conjunction with floating point math) give you is a perfectly smooth specular highlight, instead of the pixelish blob that we get on older generations of cards.
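
In ARB_fragment_program terms, the cube map fetch gets replaced by three math instructions (an illustrative fragment; the vector is assumed to arrive in a texcoord interpolator):

    TEMP L;
    # normalize the interpolated light vector with math
    # instead of a normalization cube map fetch
    DP3 L.w, fragment.texcoord[1], fragment.texcoord[1];
    RSQ L.w, L.w;
    MUL L.xyz, fragment.texcoord[1], L.w;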

There are some more things I am playing around with that will probably remain in the engine as novelties, but not supported features:

Per-pixel reflection vector calculations for specular, instead of an interpolated half-angle. The only remaining effect that has any visual dependency on the underlying geometry is the shape of the specular highlight. Ideally, you want the same final image for a surface regardless of whether it is two giant triangles or a mesh of 1024 triangles. This will not be true if any calculation done at a vertex involves anything other than linear math operations. The specular half-angle calculation involves normalizations, so the interpolation across triangles on a surface will be dependent on exactly where the vertexes are located. The most visible end result of this is that on large, flat, shiny surfaces where you expect a clean highlight circle moving across it, you wind up with a highlight that distorts into an L shape around the triangulation line.
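
The math being added is R = 2N(N.V) - V, evaluated per pixel. A sketch of the fragment program section (texture and texcoord assignments are illustrative):

    TEMP N, V, R;
    # fetch and expand the tangent space normal
    TEX N, fragment.texcoord[0], texture[0], 2D;
    MAD N.xyz, N, 2.0, -1.0;
    # renormalize the interpolated view vector
    DP3 V.w, fragment.texcoord[2], fragment.texcoord[2];
    RSQ V.w, V.w;
    MUL V.xyz, fragment.texcoord[2], V.w;
    # R = 2 * N * (N . V) - V
    DP3 R.w, N, V;
    ADD R.w, R.w, R.w;
    MAD R.xyz, N, R.w, -V;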

The extra instructions to implement this did have a noticeable performance hit, and I was a little surprised to see that the highlights not only stabilized in shape, but also sharpened up quite a bit, changing the scene more than I expected. This probably isn't a good tradeoff today for a gamer, but it is nice for any kind of high-fidelity rendering.

Renormalization of surface normal map samples makes significant quality improvements in magnified textures, turning tight, blurred corners into shiny, smooth pockets, but it introduces a huge amount of aliasing on minimized textures. Blending between the cases is possible with fragment programs, but the performance overhead does start piling up, and it may require stashing some information in the normal map alpha channel that varies with mip level. Doing good filtering of a specularly lit normal map texture is a fairly interesting problem, with lots of subtle issues.

Bump mapped ambient lighting will give much better looking outdoor and well-lit scenes. This only became possible with dependent texture reads, and it requires new designer and tool-chain support to implement well, so it isn't easy to test globally with the current Doom datasets, but isolated demos are promising.

The future is in floating point framebuffers. One of the most noticeable things this will get you without fundamental algorithm changes is the ability to use a correct display gamma ramp without destroying the dark color precision. Unfortunately, using a floating point framebuffer on the current generation of cards is pretty difficult, because no blending operations are supported, and the primary thing we need to do is add light contributions together in the framebuffer. The workaround is to copy the part of the framebuffer you are going to reference to a texture, and have your fragment program explicitly add that texture, instead of having the separate blend unit do it. This is intrusive enough that I probably won't hack up the current codebase, instead playing around on a forked version.
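
A sketch of that workaround (the texture handle and draw call are assumed helpers, and float texture support is taken as given):

    extern GLuint frameCopyTexture;        /* assumed: float texture handle */
    extern void DrawLightPass( void );     /* assumed: draws the light pass */

    void addLightContribution( int x, int y, int w, int h ) {
        /* snapshot the region this light touches into a texture */
        glBindTexture( GL_TEXTURE_RECTANGLE_NV, frameCopyTexture );
        glCopyTexSubImage2D( GL_TEXTURE_RECTANGLE_NV, 0, x, y, x, y, w, h );

        /* the fragment program then ends with an explicit
               TEX prev, fragment.position, texture[3], RECT;
               ADD result.color, light, prev;
           instead of relying on the missing blend unit */
        DrawLightPass();
    }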

Floating point framebuffers and complex fragment shaders will also allow much better volumetric effects, like volumetric illumination of fogged areas with shadows and additive/subtractive eddy currents.

John Carmack

-----------------------------------------
John Carmack's .plan for Feb 07, 2003
-----------------------------------------

The machinima music video that Fountainhead Entertainment (my wife's company) produced with Quake based tools is available for viewing and voting on at: http://www.mtv.com/music/viewers_pick/ ("In the waiting line")

I thought they did an excellent job of catering to the strengths of the medium, and not attempting to make a game engine compete (poorly) as a general purpose renderer. In watching the video, I did beat myself up a bit over the visible popping artifacts on the environment mapping, which are a direct result of the normal vector quantization in the md3 format. While it isn't the same issue (normals are full floating point already in Doom), it was the final factor that pushed me to do the per-pixel environment mapping for the new cards in the current engine.

The neat thing about the machinima aspect of the video is that they also have a little game you can play with the same media assets used to create the video. Not sure when it will be made available publicly.

johnc_plan_2004.txt

-----------------------------------------
John Carmack's .plan for Dec 31, 2004
-----------------------------------------

Welcome

I get a pretty steady trickle of emails from people hoping for .plan file updates. There were two main factors involved in my not doing updates for a long time - a good chunk of my time and interest was sucked into Armadillo Aerospace, and the fact that the work I had been doing at Id for the last half of Doom 3 development was basically pretty damn boring.

The Armadillo work has been very rewarding from a learning-lots-of-new-stuff perspective, and I'm still committed to the vehicle development, even post X-Prize, but the work at Id is back to a high level of interest now that we are working on a new game with new technology. I keep running across topics that are interesting to talk about, and the Armadillo updates have been a pretty good way for me to organize my thoughts, so I'm going to give it a more general try here. .plan files were appropriate ten years ago, and sort of retro-cute several years ago, but I'll be sensible and use the web.

I'm not quite sure what the tone is going to be - there will probably be some general interest stuff, but a bunch of things will only be of interest to hardcore graphics geeks.

I have had some hesitation about doing this because there are a hundred times as many people interested in listening to me talk about games / graphics / computers as there are people interested in rocket fabrication, and my mailbox is already rather time consuming to get through.

If you really, really want to email me, add a "[JC]" in the subject header so the mail gets filtered to a mailbox that isn't clogged with spam. I can't respond to most of the email I get, but I do read everything that doesn't immediately scan as spam. Unfortunately, the probability of getting an answer from me doesn't have a lot of correlation with the quality of the question, because what I am doing at the instant I read it is more dominant, and there is even a negative correlation for "deep" questions that I don't want to make an off-the-cuff response to.

Quake 3 Source

I intended to release the Q3 source under the GPL by the end of 2004, but we had another large technology licensing deal go through, and it would be poor form to make the source public a few months after a company paid hundreds of thousands of dollars for full rights to it. True, being public under the GPL isn't the same as having a royalty free license without the need to disclose the source, but I'm pretty sure there would be some hard feelings.

Previous source code releases were held up until the last commercial license of the technology shipped, but with the evolving nature of game engines today, it is a lot less clear. There are still bits of early Quake code in Half Life 2, and the remaining licensees of Q3 technology intend to continue their internal developments along similar lines, so there probably won't be nearly as sharp a cutoff as before. I am still committed to making as much source public as I can, and I won't wait until the titles from the latest deal have actually shipped, but it is still going to be a little while before I feel comfortable doing the release.

Random Graphics Thoughts

Years ago, when I first heard about the inclusion of derivative instructions in fragment programs, I couldn't think of anything off hand that I wanted them for. As I start working on a new generation of rendering code, uses for them come up a lot more often than I expected.

I can't actually use them in our production code because they are an Nvidia-only feature at the moment, but it is convenient to do experimental code with the nv_fragment_program extension before figuring out various ways to build funny texture mip maps so that the built-in texture filtering hardware calculates a value somewhat like the derivative I wanted.

If you are basically just looking for plane information, as you would for modifying things with texture magnification or stretching shadow buffer filter kernels, the derivatives work out pretty well. However, if you are looking at a derived value, like a normal read from a texture, the results are almost useless because of the way they are calculated. In an ideal world, all of the samples to be differenced would be calculated at once, then the derivatives calculated from there, but the hardware only calculates 2x2 blocks at a time. Each of the four pixels in the block is given the same derivative, and there is no influence from neighboring pixels. This gives derivative information that is basically half the resolution of the screen and sort of point sampled. You can often see this effect with bump mapped environment mapping into a mip-mapped cube map, where the texture LOD changes discretely along the 2x2 blocks. Explicitly coloring based on the derivatives of a normal map really shows how nasty the calculated value is.
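
A little C model of my understanding of the typical 2x2 quad behavior (not any vendor's documented implementation) makes the artifact obvious: every pixel in a quad shares one forward difference, and neighboring quads contribute nothing.

    #define WIDTH 640

    /* value[y][x] holds the quantity being differenced at each pixel */
    float quadDDX( const float value[][WIDTH], int x, int y ) {
        int qx = x & ~1;    /* snap to the containing 2x2 block */
        int qy = y & ~1;
        /* one difference per quad, reused by all four pixels */
        return value[qy][qx + 1] - value[qy][qx];
    }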

Speaking of bump mapped environment sampling.. I spent a little while tracking down a highlight that I thought was misplaced. In retrospect it is obvious, but I never considered the artifact before: With a bump mapped surface, some of the on-screen normals will actually be facing away from the viewer. This causes minor problems with lighting, but when you are making a reflection vector from it, the vector starts reflecting into the opposite hemisphere, resulting in some sky-looking pixels near bottom edges on the model. Clamping the surface normal to not face away isn't a good solution: you get areas that "see right through" to the environment map, because a reflection past a clamped perpendicular vector doesn't change the viewing vector. I could probably ramp things based on the geometric normal somewhat, and possibly pre-calculate some data into the normal maps, but I decided it wasn't a significant enough issue to be worth any more development effort or speed hit.
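
In vector terms (a plain C sketch of the standard reflection formula, not engine code), the problem case is a per-pixel normal whose dot product with the view vector goes negative -- the reflection crosses into the hemisphere behind the surface:

    /* R = 2N(N.V) - V.  When dot(N,V) < 0 the reflection vector lands
       in the back hemisphere, which is where the stray sky-colored
       pixels come from. */
    void reflect3( const float n[3], const float v[3], float r[3] ) {
        float d = 2.0f * ( n[0] * v[0] + n[1] * v[1] + n[2] * v[2] );
        r[0] = n[0] * d - v[0];
        r[1] = n[1] * d - v[1];
        r[2] = n[2] * d - v[2];
    }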

Speaking of cube maps.. The edge filtering on cube maps is showing up as an issue for some algorithms. The hardware basically picks a face, then treats it just like a 2D texture. This is fine in the middle of the texture, but at the edges (which are a larger and larger fraction as size decreases) the filter kernel just clamps instead of being able to sample the neighbors in an adjacent cube face. This is generally a non-issue for classic environment mapping, but when you start using cube map lookups with explicit LOD bias inputs (say, to simulate variable specular powers into an environment map) you can wind up with a surface covered with six constant color patches instead of the smoothly filtered coloration you want. The classic solution would be to implement border texels, but that is pretty nasty for the hardware and API, and would require either the application or the driver to actually copy the border texels from all the other faces. Last I heard, upcoming hardware was going to start actually fetching from the adjacent faces directly. A second-tier chip company claimed to do this correctly a while ago, but I never actually tested it.

Topics continue to chain together; I'll probably write some more next week.

johnc_plan_2005.txt

-----------------------------------------
John Carmack's .plan for May 27, 2005
-----------------------------------------

Cell phone adventures

I'm not a cell phone guy. I resisted getting one at all for years, and even now I rarely carry it. To a first approximation, I don't really like talking to most people, so I don't go out of my way to enable people to call me. However, a little while ago I misplaced the old phone I usually take to Armadillo, and my wife picked up a more modern one for me. It had a nice color screen and a bunch of bad java game demos on it. The bad java games did it.

I am a big proponent of temporarily changing programming scope every once in a while to reset some assumptions and habits. After Quake 3, I spent some time writing driver code for the Utah-GLX project to give myself more empathy for the various hardware vendors and get back to some low-level register programming. This time, I decided I was going to work on a cell phone game.

I wrote a couple of Java programs several years ago, and I was left with a generally favorable impression of the language. I dug out my old "Java in a Nutshell" and started browsing around on the web for information on programming for cell phones. After working my way through the alphabet soup of J2ME, CLDC, and MIDP, I've found that writing for the platform is pretty easy.

In fact, I think it would be an interesting environment for beginning programmers to learn on. I started programming on an Apple II a long time ago, when you could just do an "hgr" and start drawing to the screen, which was rewarding. For years, I've had misgivings about people learning programming on Win32 (unix / X would be even worse), where it takes a lot of arcane crap just to get to the point of drawing something on the screen and responding to input. I assume most beginners wind up with a lot of block copied code that they don't really understand.

All the documentation and tools needed are free off the web, and there is an inherent neatness to being able to put the program on your phone and walk away from the computer. I wound up using the latest release of NetBeans with the mobility module, which works pretty well. It certainly isn't MSDev, but for a free IDE it seems very capable. On the downside, MIDP debugging sessions are very flaky, and there is something deeply wrong when text editing on a 3.6 ghz processor is anything but instantaneous.

I spent a while thinking about what would actually make a good game for the platform, which is a very different design space than PCs or consoles. The program and data sizes are tiny, under 200k for java jar files. A single texture is larger than that in our mainstream games. The data sizes to screen ratios are also far out of the range we are used to. A 128x128x16+ bit color screen can display some very nice graphics, but you could only store a half dozen uncompressed screens in your entire size budget. Contrast with PCs, which may be up to a few megabytes of display data, but the total game data may be five hundred times that.

You aren't going to be able to make an immersive experience on a 2" screen, no matter what the graphics look like. Moody and atmospheric are pretty much out. Stylish and fun is about the best you can do.

The standard cell phone style discrete button direction pad with a center action button is a good interface for one handed navigation and selection, but it sucks for games, where you really want a Game Boy style rocking direction pad for one thumb, and a couple separate action buttons for the other thumb. These styles of input are in conflict with each other, so it may never get any better. The majority of traditional action games just don't work well with cell phone style input.

Network packet latency is bad, and not expected to be improving in the foreseeable future, so multiplayer action games are pretty much out (but see below).

I have a small list of games that I think would work out well, but what I decided to work on is DoomRPG - sort of Bard's Tale meets Doom. Step based smooth sliding/turning tile movement and combat works out well for the phone input buttons, and exploring a 3D world through the cell phone window is pretty neat. We talked to Jamdat about the business side of things, and hired Fountainhead Entertainment to turn my proof-of-concept demo and game plans into a full-featured game.

So, for the past month or so I have been spending about a day a week on cell phone development. Somewhat to my surprise, there is very little internal conflict switching off from the high end work during the day with gigs of data and multi-hundred instruction fragment shaders down to texture mapping in Java at night with one table lookup per pixel and 100k of graphics. It's all just programming and design work.
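
The inner loop in question is the classic one-lookup-per-pixel affine mapper -- something along these lines, written here in C for clarity (a sketch; the Java version differs only in syntax, and this is not the shipping code):

    /* 16.16 fixed point affine texture span over a 256x256, 16 bit
       texture: one table lookup per pixel */
    void drawSpan( unsigned short *dest, int count,
                   const unsigned short *texels,
                   int s, int t, int sStep, int tStep ) {
        while ( count-- > 0 ) {
            /* index = (t_int & 255) * 256 + (s_int & 255) */
            *dest++ = texels[ ( ( t >> 8 ) & 0xff00 ) | ( ( s >> 16 ) & 0xff ) ];
            s += sStep;
            t += tStep;
        }
    }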

It turns out that I'm a lot less fond of Java for resource-constrained work. I remember all the little gripes I had with the Java language, like no unsigned bytes, and the consequences of strong typing, like no memset, and the inability to read resources into anything but a char array, but the frustrating issues are details down close to the hardware.

The biggest problem is that Java is really slow. On a pure cpu / memory / display / communications level, most modern cell phones should be considerably better gaming platforms than a Game Boy Advance. With Java, on most phones you are left with about the CPU power of an original 4.77 MHz IBM PC, and lousy control over everything.

I spent a fair amount of time looking at Java bytecode disassembly while optimizing my little rendering engine. This is interesting fun like any other optimization problem, but it alternates with a bleak knowledge that even the most inspired Java code is going to be a fraction of the performance of pedestrian native C code.

Even compiled to completely native code, Java semantic requirements like range checking on every array access hobble it. One of the phones (Motorola i730) has an option that does some load time compiling to improve performance, which does help a lot, but you have no idea what it is doing, and innocuous code changes can cause the compilable heuristic to fail.

Write-once-run-anywhere. Ha. Hahahahaha. We are only testing on four platforms right now, and not a single pair has the exact same quirks. All the commercial games are tweaked and compiled individually for each (often 100+) platform. Portability is not a justification for the awful performance.

Security on a cell phone is justification for doing something, but an interpreter isn't a requirement - memory management units can do just as well. I suspect this did have something to do with Java's adoption early on. A simple embedded processor with no MMU could run arbitrary programs securely with java, which might make it the only practical option. However, once you start using blazingly fast processors to improve the awful performance, an MMU with a classic OS model looks a whole lot better.

Even saddled with very low computing performance, tighter implementation of the platform interface could help out a lot. I'm not seeing very conscientious work on the platforms so far. For instance, there is just no excuse for having 10+ millisecond granularity in timing. Given that the java paradigm is sort of thread-happy anyway, having a real scheduler that Does The Right Thing with priorities and hardware interfacing would be an obvious thing. Pressing a key should generate a hardware interrupt, which should immediately activate the key listening thread, which should be able to immediately kill an in-process rendering and restart another one if desired. The attitude seems to be 15 msec here, 20 there, stick it on a queue, finish up a timeslice, who cares, right?

I suspect I will enjoy working with BREW, the competing standard for cell phone games. It lets you use raw C/C++ code, or even, I suppose, assembly language, which completely changes the design options. Unfortunately, they only have a quarter the market share that the J2ME phones have. Also, the relatively open java platform development strategy is what got me into this in the first place - one night I just tried writing a program for my cell phone, which isn't possible for the more proprietary BREW platform.

I have a serious suggestion for the handset designers to go with my idle bitching. I have been told that fixing data packet latency is apparently not in the cards, and it isn't even expected to improve much with the change to 3G infrastructure. Packet data communication seems more modern, and has the luster of the web, but it is worth realizing that for network games and many other flashy Internet technologies like streaming audio and video, we use packets to rather inefficiently simulate a switched circuit.

Cell phones already have a very low latency digital data path - the circuit switched channel used for voice. Some phones have included cellular modems that use either the CSD standard (circuit switched data) at 9.8Kbits or 14.4Kbits or the HSCSD standard (high speed circuit switched data) at 38.4Kbits or 57.6Kbits. Even the 9.8Kbit speed would be great for networked games. A wide variety of two player peer-to-peer games and multiplayer packet server based games could be implemented over this with excellent performance. Gamers generally have poor memories of playing over even the highest speed analog modems, but most of the problems are due to having far too many buffers and abstractions between the data producers/consumers and the actual wire interface. If you wrote eight bytes to the device and it went in the next damned frame (instead of the OS buffer, which feeds into a serial FIFO, which goes into another serial FIFO, which goes into a data compressor, which goes into an error corrector, and probably a few other things before getting into a wire frame), life would be quite good. If you had a real time scheduler, a single frame buffer would be sufficient, but since that isn't likely to happen, having an OS buffer with accurate queries of the FIFO positions is probably best. The worst gaming experiences with modems weren't due to bandwidth or latency, but to buffer pileup.

johnc_plan_2006.txt

-----------------------------------------
John Carmack's .plan for May 02, 2006
-----------------------------------------

Orcs & Elves

I'm not managing to make regular updates here, but I'll keep this around just in case. I have a bunch of things that I want to talk about -- some thoughts on programming style and reliability, OpenGL, Xbox 360, etc, but we have a timely topic with the release of our second mobile game, Orcs & Elves, that has spurred me into making this update.

DoomRPG, our (Id Software's and Fountainhead Entertainment's) first mobile title, has been very successful, both in sales and in awards. I predict that the interpolated turn based style of 3D gaming will be widely adopted on the mobile platform, because it plays very naturally on a conventional cell phone. Gaming will be a lot better when there is a mass market of phones that can be played more like a gamepad, but you need to make do with what you actually have.

One of the interesting things about mobile games is that the sales curve is not at all like the drastically front loaded curve of a PC or console game. DoomRPG is selling better now than when it was initially released, and the numbers are promising for supporting additional development work. However, unless I am pleasantly surprised, the hardware capabilities are going to advance much faster than the market in the next couple years, leading to an unusual situation where you can only afford to develop fairly crude games on incredibly powerful hardware. Perhaps "elegantly simple" would be the better way of looking at it, but it will still wind up being like developing an Xbox title for $500,000. That will wind up being great for many small game companies that just want to explore an idea, but having resources far in excess of your demands does minimize the value of being a hot shot programmer. :-)

To some degree this is already the case on high end BREW phones today. I have a pretty clear idea what a maxed out software renderer would look like for that class of phones, and it wouldn't be the PlayStation-esque 3D graphics that seems to be the standard direction. When I was doing the graphics engine upgrades for BREW, I started along those lines, but after putting in a couple days at it I realized that I just couldn't afford to spend the time to finish the work. "A clear vision" doesn't mean I can necessarily implement it in a very small integral number of days. I wound up going with a less efficient and less flexible approach that was simple and robust enough to not likely need any more support from me after I handed it over (it didn't).

During the development of DoomRPG, I had commented that it seemed obvious that it should be followed up with a "traditional, Orcs&Elves sort of fantasy game". A couple people independently commented that "Orcs&Elves" wasn't a bad name for a game, so since we didn't run into any obstacles, Orcs&Elves it was. Naming new projects is a lot harder than most people think, because of trademark issues.

In hindsight, we made a strategic mistake at the start of O&E development. We were fresh off the high end BREW version of DoomRPG, and we all liked developing on BREW a lot better than Java. It isn't that BREW is inherently brilliant, it just avoids the deep sucking nature of java for resource constrained platforms (however, note the above about many mobile games not being resource constrained in the future), and allows you to work inside visual studio. O&E development was started high-end first with the low-end versions done afterwards. I should have known better (Anna was certainly suspicious), because it is always easier to add flashy features without introducing any negatives than it is to chop things out without damaging the core value of a game. The high end version is really wonderful, with all the graphics, sound, and gameplay we aimed for, but when we went to do the low end versions, we found that even after cutting the media as we planned, we were still a long way over the 280k java application limit. Rather than just butchering it, we went for pain, suffering, and schedule slippage, eventually delivering a game that still maintained high quality after the de-scoping (the low end platforms still represent the majority of the market). It would have been much easier to go the other way, but the high end phone users will be happy with our mistake.

DoomRPG had three base platforms that were customized for different phones -- Java, low end BREW, and high end BREW. O&E added a high end java version that kept most of the quality of the high end BREW version on phones fast enough to support it from carriers willing to allow the larger download. The download size limits are probably the most significant restriction for gaming on the high end phones. I don't really understand why the carriers encourage streaming video traffic, but balk at a couple megs of game media.

I am really looking forward to the response to Orcs&Elves, because I think it is one of the best product evolutions I have been involved in. The core game play mechanics that were laid out in DoomRPG have proven strong and versatile (again, I bet we have a stable genre here), but now we have a big bag of tricks and a year of polishing the experience behind us, along with a world of some depth. I found it a very good indicator that play testers almost always lost track of time while playing.

This project was doubly nostalgic for me -- the technology was over a decade old for me, but the content took me back twenty years. All the computer games I wrote in high school were adventure games, and my first two commercial sales were Ultima style games for the Apple II, but Id Software never got around to doing one. Old timers may recall that we were going to do a fantasy game called "The Fight For Justice" (starring a hero called Quake...) after Commander Keen, but Wolfenstein 3D and the birth of the FPS sort of got in the way. :-)

johnc_plan_2007.txt

-----------------------------------------
John Carmack's .plan for Nov 02, 2007
-----------------------------------------

Technology and Games

Source: http://blogs.ign.com/OrcsandElves/2007/11/02/70591/

Most of my time lately is spent working on Rage, id Software's Id Tech 5 based game that runs on PCs, Macs, 360s, and PS3s. A modern high-end game is a really breathtaking consumer of computation and storage resources for anyone who has been around computers for any length of time. Our target platforms have at least 512 MB of RAM, almost 20 GB of media storage, and many tens of gigaflops of computation, but the development environment involves an even more massive deployment, with a terabyte of raw data being generated before the final culling and compression is done. It is easy to be a little nonchalant about the continuous pace of improvement with computing, but I still take the time to feel a sense of awe about it all.

I started programming on a Commodore VIC-20 with 4k of RAM and a tape drive, and I remember writing absurdly long lines of BASIC code to save the couple bytes that a new line number would consume. As the years went by, and my projects moved from the Apple II to the PC and the early consoles, I continued to gain new insights and perspective on different problems, and I often thought that it would be fun to go back to one of the early small systems. Now that I "knew what I was doing", I could do a lot more within the tight constraints than I was ever able to before. I actually carried a bunch of old systems around with me from house to house for many years before I reached the conclusion that I never was going to spend any real time on them again, and finally tossed them out.

As technology continued to rapidly advance, I saw a lot of good programmers sort of peel off from high end game development, and migrate to other platforms where their existing skill sets were still exactly what was needed. There was a large contingent of hard core assembly language programmers that never wanted to "get soft" with the move to C for more complex game development, and many of them moved from PCs to platforms like the Super Nintendo, and eventually into embedded systems or device firmware. There was another contingent that never wanted to move to windows, and so on.

There is an appeal to working with tighter constraints. The more limited the platform, the closer you can feel you are getting to an "optimal" solution. On a modern big system, there are dozens of ways to accomplish any given task, and it just isn't possible to evaluate all the tradeoffs between the implementations of hundreds of different tasks. On a little system, you have to constrain your design to have a much smaller total number of tasks, and the available options are a lot more reduced. Seeing if something is The Right Thing is a lot easier.

I probably had my personal "moment of truth" around the beginning of Doom 3's development, when it became clear that it is no longer possible to deeply understand every single part of a modern application. There is just too much. Nevertheless, I found that I could still enjoy my work when confined to a subset of the entire project, and I have thus remained committed to the high end. However, the appeal of smaller systems still lingers.

A couple years ago, almost on a whim, I got involved in developing games for mobile phones. The primary instigator was when I ran a bunch of games on a new phone and was left frankly appalled at how poor they were. I was thinking to myself "I wrote better games than this each month before we founded Id". I downloaded the SDK for the phone, and started tinkering around a bit. A proof of concept demo and a plan for a game play style specifically tailored for cell phones followed, and we wound up with DoomRPG, and later Orcs & Elves, which turned out to be big hits.

In an ideal world, where I could either stop time or clone myself, I would act as a lead programmer for some smaller projects. In the real world, I can't justify spending much time away from the high-end work, so the low-end work gets done in a few short bursts of engine creation and foundation laying, which is then handed over to the Fountainhead team to actually build a great game. After that, Anna mostly uses me as a threat -- if her programmers tell her that something she really wants in a game can't be done, she threatens to call me up and have me tell them how straightforward the problem really is, which usually fuels them to figure out how to do it on their own.

Mobile development was fun in a resource constrained design sort of way, but the programming didn't have the "tight" feel that early gaming had, due to the huge variability in the platforms. When I was messing around on my own phone, I spent some time doing Java bytecode disassembly and timing, but it was fairly pointless in light of the two hundred or so different phones the game would wind up running on.

Enter the Nintendo DS.

We had initially looked at possibly moving one of the cell phone titles over to the GBA, but it didn't look like the market would support it, and technically it would have only turned out somewhere in between the low end and high end cell phone versions. With the success of DS, and a suspicion that the players might be a little more open to new third party titles, EA decided that they would support developing a really enhanced version of Orcs&Elves for the DS. This is going a bit out on a limb -- most successful Game Boy / DS titles have been either first-party Nintendo titles, or titles with a strong movie / toy / history tie-in. While Orcs & Elves is doing well on mobile, it is still very far from a recognized brand.

The resource limits on the DS make it almost perfect for small team development. The hardware is fun to work with, you can directly poke at all the registers, the tool chain works well, and the built-in limitations of cartridge memory keep the design decisions fairly straightforward. Going one step farther up to the PSP with a UMD brings you into the realm of large media sizes that can rapidly consume multi-million dollar development budgets.

Once the decision was made to go for it, it was my job to figure out what we could reasonably hope to accomplish on the platform, and bring up a first cut at the 3D rendering engine.

Up next: all the technical details

-----------------------------------------
John Carmack's .plan for Nov 08, 2007
-----------------------------------------

DS Technology

Source: http://blogs.ign.com/OrcsandElves/2007/11/08/71156/

The actual implementation decisions for Orcs&Elves DS were driven by the hardware, the development timeline, and the budget. With a five person team, we had six months to bring it all together on a new platform. I wrote the code for the hardware accelerated 3D renderer, and for the remainder of the project I was technical advisor.

The basic compute power of a 32-bit, 66 MHz ARM processor and four megs of ram is a pleasant size to work with, basically about what we had on the PC back when the original Doom was written. You are intrinsically limited to a design that is compact enough that you can wrap your head around every aspect of it at once, but you don't wind up mired in crazy size decisions like trading a couple functions of code for an extra graphical icon.

Going back to fixed point math is always a chore, and while the DS has hardware acceleration for fixed point math, it doesn't automatically integrate with C/C++ code. The compiler / linker / debugger tool chain worked just fine, and I never felt that I was fighting the development environment like it used to be with the really early consoles. The DS SDK takes my preferred approach of both documenting the hardware fully and providing a helper library with full source code that you can pull apart as necessary.
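
The kind of glue that means in practice (a sketch of 20.12 fixed point helpers, the sort of format the DS 3D interfaces traffic in, with a 64 bit intermediate so the multiply doesn't overflow):

    typedef int fixed12;                   /* 20.12 fixed point */

    #define FLOAT_TO_F12( x )   ( (fixed12)( ( x ) * 4096.0f ) )

    static inline fixed12 mulF12( fixed12 a, fixed12 b ) {
        return (fixed12)( ( (long long)a * b ) >> 12 );
    }

    static inline fixed12 divF12( fixed12 a, fixed12 b ) {
        return (fixed12)( ( ( (long long)a ) << 12 ) / b );
    }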
The baseline spec we started with was a 16 meg cart for the game. I was sure we could get all the features we were discussing to fit in there, but in hindsight, we really should have pushed for a 32 meg cart, because it would have allowed us to add a lot more high quality 2D art to the game, and include some pre-rendered cinematics to help set the mood and tell the story. Anna had pushed for this from the beginning, but I was worried that we wouldn't have enough time to create the additional media, and I didn't want to eat the extra manufacturing costs on a speculative game release of an unknown IP.
At first glance, a 16 meg cart is over eight times as large as our high end cell phone distribution, but that turns out to be misleading. Everything is highly compressed on the mobile versions, but because of the need to directly DMA many assets in a usable form on the DS, and sometimes due to the tighter ram limits, a lot of the media takes up more space on the DS. The game is still a lot bigger, but not 8x.
Interfacing with the DS 3D graphics processor was my major contribution to the project. The DS is an unusual graphics machine, with no direct analog before it, but the individual pieces were close enough to things I had experience with that I was able to be effective pretty quickly. While there are a few things I wish would have been done differently, I still found it a lot of fun to work with.
The geometry engine is nice and clean, implementing a good subset of the basic OpenGL pipe in fixed point. It is also quite fast relative to the rest of the system. I had originally laid out the code to double buffer all the command traffic so that the geometry engine could completely overlap with the CPU, but it turned out that we would run into the polygon limits on the rasterizer before the geometry engine worked up much of a sweat, so I just saved memory and let the geometry engine run with a single command buffer. The game logic tended to take more time to process than the geometry engine, so it was almost never a gating factor. A title that heavily used vertex lighting and matrix stack operations might be able to load up the geometry pipeline a bit, but with just texture and color at the verts, it has margin.
Unlike classic OpenGL, you can change the matrix in the middle of specifying a primitive, allowing single bone skinned model rendering to be performed with a conventional vertex pipeline. This was a novel approach that I hadn't seen anywhere else. It was a little odd to see an explicitly separate model and projection matrix in hardware, since they are usually pre-multiplied into a single MVP matrix, but it allows the slow main cpu to avoid having to do some matrix math.
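
Conceptually, the submission loop for a single bone skinned model looks something like the sketch below. LoadMatrix and EmitVertex are hypothetical stand-ins for the real geometry command writes, not any actual SDK call:

[code]// Sketch of single bone skinning on hardware that allows a matrix
// load mid-primitive: reload the bone matrix whenever the next vertex
// uses a different bone.
typedef struct { float pos[3]; int bone; } skinVert_t;

void DrawSkinnedTriList( const skinVert_t *v, int numVerts,
                         const float boneMats[][16],
                         void (*LoadMatrix)( const float m[16] ),
                         void (*EmitVertex)( const float pos[3] ) ) {
    int curBone = -1;
    for ( int i = 0 ; i < numVerts ; i++ ) {
        if ( v[i].bone != curBone ) {
            curBone = v[i].bone;
            LoadMatrix( boneMats[curBone] );    // legal mid-primitive here
        }
        EmitVertex( v[i].pos );
    }
}[/code]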
The question of 3D models versus sprites was one of the most significant decisions for Orcs&Elves. There are several situations in the game where you may be facing six or eight monsters, and I was concerned about how low poly the enemies would have to be to avoid problems with the rasterizer, especially when you consider that monsters can chase you into scenes that may tax the rasterizer all by themselves. Coupled with the fact that developing a new skeletal animation system, building all new models, and animating them would almost certainly have busted our development timeline, we decided sprites would do a better job.
The rasterization side of things on the DS is... quirky. Almost all 3D rendering systems today use a frame buffer and a depth buffer that is stored in dedicated graphics memory and incrementally updated by the graphics processor. The DS essentially renders as it is being displayed, with only a small set of lines acting as a buffer. This saves memory, and can give speed and power savings, but it has some significant tradeoffs.
Failure modes are bad -- If you overload the polygon list, the remaining polygons just don't show up. This can be mitigated by drawing the nearby and important things first, so if anything disappears it is hopefully in the distance where it isn't very noticeable. If you overload the fill rate, horizontal bars appear across the screen. In a perspective view, the greatest complexity and density tend to be in the middle of the screen, so if your scene gets too bad, a jittering color bar tends to appear in the center.
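
In practice, "drawing the nearby things first" just means sorting the submission order, something like this sketch (illustrative structure, not the shipping code):

[code]#include <stdlib.h>

// Sketch of submitting drawables near-to-far so that, if the polygon
// list overflows, it is the distant ones that vanish.
typedef struct {
    int dist;               // distance from the player, fixed point
    /* ... the rest of the drawable entity ... */
} drawEntity_t;

static int CompareNearFirst( const void *a, const void *b ) {
    return ( (const drawEntity_t *)a )->dist -
           ( (const drawEntity_t *)b )->dist;
}

void SubmitEntities( drawEntity_t *list, int count ) {
    qsort( list, count, sizeof( list[0] ), CompareNearFirst );
    // ... issue the geometry commands in this order ...
}[/code]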
In a conventional rendering engine, overloading complexity just tends to make the game go slower. On the DS, overloading looks broken. This means that you need to leave yourself a lot more margin, and therefore use the hardware less aggressively. The plus side is that since the hardware is re-drawing the screen at 60hz no matter what you do, you are strongly encouraged to make sure the rest of your game also stays at 60hz.
The lack of texture filtering on the DS is the most obvious difference with other current platforms, and it does strongly encourage a cartoony art style for games. The art style for O&E isn't really ideal, and perhaps we should have stylized things a bit more.
You don't have a lot of texture memory available on the DS, less even than an original Playstation. In O&E, the environment graphics (walls and floors) are allocated statically at map load time, and must be balanced by the level artist. The character sprites are dynamically managed, with high resolution artwork for the monster directly in front of the player being swapped in every frame. The field of view and sprite positioning are carefully managed, along with a "pushback factor", to ensure that the sprites are at 1:1 scale when they are one tile away, and 2:1 scale when they are right in front of the player. Without bilinear filtering on the texturing, non-integral scales wind up looking very ugly. This is one of the advantages of the tile based play -- if it was completely free form, the monsters would look a lot uglier. There isn't any intermediate step to be taken to improve the monster rendering without using a full 4x the memory to render an adjacent monster at a 1:1 scale. Even if we had more memory, I probably would have spent it on more animations instead of higher resolution.
The most disappointing mistake in the DS hardware is the lack of even the basic original blending modes from OpenGL. This was a mistake that was common in the very first generation of PC 3D accelerators, where so many companies just assumed that "blending" meant "alpha blending", and they didn't include support for add, modulate, and the other standard blending modes. No relevant company had made that mistake in a decade, and Nintendo's consoles have always done it right since the N64, so it was a surprise to see it in the DS. Additive blending in particular is crucial to most "3D flash" sorts of rendering, and various non-additive modulation modes are used for light mapping and other core rendering features.
I only got to spend four days actually writing all the 3D code for Orcs&Elves, so there are lots of potential directions that I am interested in exploring in the future. We plan to have two more DS projects in development next year, which I hope will let me try out a skeletal animation system, experiment with the networking hardware, and implement a more flexible high level culling algorithm than what I used in O&E.
John Carmack

johnc_plan_2009.txt
ADDED

-----------------------------------------
John Carmack's .plan for Mar 26, 2009
-----------------------------------------

iPhone development: Wolfenstein 3D Classic

Source: http://www.idsoftware.com/wolfenstein3dclassic/wolfdevelopment.htm

I had been frustrated for over a year with the fact that we didn't have any iPhone development projects going internally at Id. I love my iPhone, and I think the App Store is an extremely important model for the software business. Unfortunately, things have conspired against us being out early on the platform.
Robert Duffy and I spent a week early on starting to bring up the Orcs & Elves DS codebase on the iPhone, which would have been a nice project for a launch title, but it wasn't going to be a slam dunk. The iPhone graphics hardware is a more capable superset of the DS hardware (the driver overhead is far, far worse, though), but the codebase was fairly DS specific, with lots of Nintendo API calls all over the place. I got the basics drawing by converting things to OpenGL ES, but I was still on the fence as to whether the best approach to get all the picky little special effects working would be a complete GL conversion, or a DS graphics library emulation layer. Coupled with the fact that the entire user interface would need to be re-thought and re-tested, it was clear that the project would take several months of development time, and need artists and designers as well as coding work. I made the pitch that this would still be a good plan, but the idMobile team was already committed to the Wolfenstein RPG project for conventional Java and BREW mobile phones, and Anna didn't want to slip a scheduled milestone on the established, successful development directions there for a speculative iPhone project.
After thinking about the platform's capabilities a bit more, I had a plan for an aggressive, iPhone specific project that we actually started putting some internal resources on, but the programmer tasked with it didn't work out and was let go. In an odd coincidence, an outside development team came to us with a proposal for a similar project on the Wii, and we decided to have them work on the iPhone project with us instead. We should be announcing this project soon, and it is cool. It is also late, but that's software development...
Late last year, the mobile team had finished up all the planned versions of Wolfenstein RPG, but EA had suggested that in addition to the hundreds of customized versions they normally produce for all the various mobile phones, they were interested in having another team do a significant media quality improvement on it for the iPhone. While Wolf RPG is a very finely crafted product for traditional cell phones, it wasn't designed for the iPhone's interface or capabilities, so it wouldn't be an ideal project, but it should still be worth doing. When we got the first build to test, I was pleased with how the high res artwork looked, but I was appalled at how slow it ran. It felt like one of the mid range java versions, not better than the high end BREW as I expected. I started to get a sinking feeling. I searched around in the level for a view that would confirm my suspicion, and when I found a clear enough view of some angled geometry I saw the tell-tale mid-polygon affine swim in the texture as I rotated. They were using the software rasterizer on the iPhone. I patted myself on the back a bit for the fact that the combination of my updated mobile renderer, the intelligent level design / restricted movement, and the hi-res artwork made the software renderer almost visually indistinguishable from a hardware renderer, but I was very unhappy about the implementation.
I told EA that we were NOT going to ship that as the first Id Software product on the iPhone. Using the iPhone's hardware 3D acceleration was a requirement, and it should be easy -- when I did the second generation mobile renderer (written originally in java) it was layered on top of a class I named TinyGL that did the transform / clip / rasterize operations fairly close to OpenGL semantics, but in fixed point and with both horizontal and vertical rasterization options for perspective correction. The developers came back and said it would take two months and exceed their budget.
Rather than having a big confrontation over the issue, I told them to just send the project to me and I would do it myself. Cass Everitt had been doing some personal work on the iPhone, so he helped me get everything set up for local iPhone development here, which is a lot more tortuous than you would expect from an Apple product. As usual, my off the cuff estimate of "Two days!" was optimistic, but I did get it done in four, and the game is definitely more pleasant at 8x the frame rate.

And I had fun doing it.

Since we now were doing something resembling "real work" on the iPhone at the office, we kept it going at a low priority. One of the projects Cass was tinkering around with at home was a port of Quake 3, and we talked about different interface strategies every now and then.
Unfortunately, when we sat down to try a few things out, we found that Q3 wasn't really running fast enough to make good judgments on iPhone control systems. The hardware should be capable enough, but it will take some architectural changes to the rendering code to get the most out of it.
I was just starting to set up a framework to significantly revise Q3 when I considered the possibility of just going to an earlier codebase to experiment with initially. If we wanted to factor performance out of the equation, we could go all the way back to Wolfenstein 3D, the grandfather of FPS games. It had the basic run and gun play that has been built on for fifteen years, but it originally ran on 286 computers, so it should be pretty trivial to hold a good framerate on the iPhone.
Wolfenstein was originally written in Borland C and TASM for DOS, but I had open sourced the code long ago, and there were several projects that had updated the original code to work on OpenGL and modern operating systems. After a little looking around, I found Wolf3D Redux at http://wolf3dredux.sourceforge.net/. One of the development comments about "removal of the gangrenous 16 bit code" made me smile.
It was nice and simple to download, extract data from a commercial copy of Wolfenstein, and start playing on a PC at high resolution. Things weren't as smooth as they should be at first, but two little changes made a huge difference -- going at VBL synced update rates with one tic per cycle instead of counting milliseconds to match 70 hz game tics, and fixing a bug with premature integralization in the angle update code that caused mouse movement to be notchier than it should be. The game was still fun to play after all these years, and I began to think that it might be worthwhile to actually make a product out of Wolfenstein on the iPhone, rather than just using it as a testbed, assuming the controls worked out as fun to play. The simple episodic nature of the game would make it easy to split up into a $0.99 version with just the first episode, a more expensive version with all sixty levels, and we could release Spear of Destiny if there was additional demand. I was getting a little ahead of myself without a fun-to-play demonstration of feasibility on the iPhone, but the idea of moving the entire line of classic Id titles over -- Wolf, Doom, Quake, Quake 2, and Quake Arena, was starting to sound like a real good idea.
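
The premature integralization issue is the classic pattern sketched below -- a reconstruction of the shape of the bug, not the actual Redux code:

[code]// Truncating each mouse sample to integer angle units throws the
// fraction away every frame, making small movements notchy.
int   angle;        // game angle units
float angleAccum;   // fractional accumulator for the fixed version

void UpdateAngleBad( float mouseDelta, float scale ) {
    angle += (int)( mouseDelta * scale );   // fraction lost each call
}

// Accumulate in floating point and convert once, so small movements
// are never rounded away.
void UpdateAngleGood( float mouseDelta, float scale ) {
    angleAccum += mouseDelta * scale;
    angle = (int)angleAccum;
}[/code]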
I sent an email to the Wolf 3D Redux project maintainer to see if he might be interested in working on an iPhone project with us, but it had been over a year since the last update, and he must have moved on to other things. I thought about it a bit, and decided that I would go ahead and do the project myself. The "big projects" at Id are always top priority, but the systems programming work in Rage is largely completed, and the team hasn't been gated on me for anything in a while. There is going to be memory and framerate optimization work going on until it ships, but I decided that I could spend a couple weeks away from Rage to work on the iPhone exclusively. Cass continued to help with iPhone system issues, I drafted Eric Will to create the few new art assets, and Christian Antkow did the audio work, but this was the first time I had taken full responsibility for an entire product in a very long time.

@Design notes@

The big question was how "classic" should we leave the game? I have bought various incarnations of Super Mario Bros on at least four Nintendo platforms, so I think there is something to be said for the classics, but there were so many options for improvement. The walls and sprites in the game were originally all 64 x 64 x 8 bit color, and the sound effects were either 8khz / 8 bit mono or (sometimes truly awful) FM synth sounds. Changing these would be trivial from a coding standpoint. In the end, I decided to leave the game media pretty much unchanged, but tweak the game play a little bit, and build a new user framework around the core play experience. This decision was made a lot easier by the fact that we were right around the 10 meg over-the-air app download limit with the converted media. This would probably be the only Id project to ever be within hailing distance of that mark, so we should try to fit it in.
The original in-game status bar display had to go, because the user's thumbs were expected to cover much of that area. We could have gone with just floating stats, but I thought that BJ's face added a lot of personality to the game, so I wanted to leave that in the middle of the screen. Unfortunately, the way the weapon graphics were drawn, especially the knife, caused issues if they were just drawn above the existing face graphics. I had a wider background created for the face, and used the extra space for directional damage indicators, which was a nice improvement in the gameplay. It was a tough decision to stop there on damage feedback, because a lot of little things with view roll kicks, shaped screen blends, and even double vision or blurring effects, are all pretty easy to add and quite effective, but getting farther away from "classic".
I started out with an explicit "open door" button like the original game, but I quickly decided to just make that automatic. Wolf and Doom had explicit "use" buttons, but we did away with them on Quake with contact or proximity activation on everything. Modern games have generally brought explicit activation back by situationally overriding attack, but hunting for push walls in Wolf by shooting every tile wouldn't work out. There were some combat tactics involving explicitly shutting doors that are gone with automatic-use, and some secret push walls are trivially found when you pick up an item in front of them now, but this was definitely the right decision.
You could switch weapons in Wolf, but almost nobody actually did, except for occasionally conserving ammo with the chain gun, or challenges like "beat the game with only the knife". That functionality didn't justify the interface clutter.
The concept of "lives" was still in wolf, with 1-ups and extras at certain scores. We ditched that in Doom, which was actually sort of innovative at the time, since action games on computers and consoles were still very much take-the-quarter arcade oriented. I miss the concept of "score" in a lot of games today, but I think the finite and granular nature of the enemies, tasks, and items in Wolf is better suited to end-of-level stats, so I removed both lives and score, but added persistent awards for par time, 100% kills, 100% secrets, and 100% treasures. The award alone wasn't enough incentive to make treasures relevant, so I turned them into uncapped +1 health crumbs, which makes you always happy to find them.
I increased the pickup radius for items, which avoided the mild frustration of having to sometimes make a couple passes at an item when you are cleaning up a room full of stuff.
I doubled the starting ammo on a fresh level start. If a player just got killed, it isn't good to frustrate them even more with a severe ammo conservation constraint. There was some debate about the right way to handle death: respawn with the level as is (good in that you can keep making progress if you just get one more shot off each time, bad in that weapon pickups are no longer available), respawn just as you entered the level (good -- keep your machinegun / chaingun, bad -- you might have 1 health), or, what I chose, restart the map with basic stats just as if you had started the map from the menu.
There are 60 levels in the original Wolf dataset, and I wanted people to have the freedom to easily jump around between different levels and skills, so there is no enforcement of starting at the beginning. The challenge is to /complete/ a level, not /get to/ a level. It is fun to start filling in the grid of level completions and awards, and it often feels better to try a different level after a death. The only exception to the start-anywhere option is that you must find the entrance to the secret levels before you can start a new game there.
In watching the early testers, the biggest issue I saw was people sliding off doors before they opened, and having to maneuver back around to go through. In Wolf, as far as collision detection was concerned, everything was just a 64x64 tile map that was either solid or passable.
Doors changed the tile state when they completed opening or began closing. There was discussion about magnetizing the view angle towards doors, or somehow beveling the areas around the doors, but it turned out to be pretty easy to make the door tiles only have a solid central core against the player, so players would slide into the "notch" with the door until it opened. This made a huge improvement in playability.
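
The door "notch" test reduces to something like this sketch -- the constants are illustrative, not the shipping values:

[code]// Treat a closed door tile as solid only in a thin central strip, so
// the player slides into the recess instead of snagging on the tile.
#define TILE_SIZE  64
#define CORE_MIN   28       // solid strip across the door tile
#define CORE_MAX   36

int DoorTileBlocksPlayer( int doorIsVertical, int xInTile, int yInTile ) {
    int across = doorIsVertical ? xInTile : yInTile;
    return across >= CORE_MIN && across <= CORE_MAX;
}[/code]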
There is definitely something to be said for a game that loads in a few seconds, with automatic save of your position when you exit. I did a lot of testing by playing the game, exiting to take notes in the iPhone notepad, then restarting Wolf to resume playing. Not having to skip through animated logos at the start is nice. We got this pretty much by accident with the very small and simple nature of Wolf, but I think it is worth specifically optimizing for in future titles.
The original point of this project was to investigate FPS control schemes for the iPhone, and a lot of testing was done with different schemes and parameters. I was sort of hoping that there would be one "obviously correct" way to control it, but it doesn't turn out to be the case.
For a casual first time player, it is clearly best to have a single forward / back / turn control stick and a fire button.
Tilt control is confusing for first exposure to the game, but I think it does add to the fun factor when you use it. I like the tilt-to-move option, but people that play a lot of driving games on the iPhone seem to like tilt-to-turn, where you are sort of driving BJ through the levels. Tilt needs a decent deadband, and a little bit of filtering is good. I was surprised that the precision on the accelerometer was only a couple degrees, which makes it poorly suited for any direct mapped usage, but it works well enough as a relative speed control.
Serious console gamers tend to take to the "dual stick" control modes easily for movement, but the placement of the fire button is problematic. Using an index finger to fire is effective but uncomfortable. I see many players just move the thumb to fire, using strafe movement for fine tuning aim. It is almost tempting to try to hijack the side volume switch for fire, but the ergonomics aren't quite right, and it would be very un-Apple-like, and wouldn't be available on the iPod touch (plus I couldn't figure out how...).
We tried a tilt-forward to fire to allow you to keep your thumbs on the dual control sticks, but it didn't work out very well. Forward / back tilt has the inherent variable holding angle problem for anything, and a binary transition point is hard for people to hold without continuous feedback. Better visual feedback on the current angle and trip point would help, but we didn't pursue it much. For a game with just, say, a rocket launcher, shake/shove-to-fire might be interesting, but it isn't any good for wolf.
It was critical for the control sticks to be analog, since digital direction pads have proven quite ineffective on touch screens due to progressive lack of registration during play. With an analog stick, the player has continuous visual feedback of the stick position in most cases, so they can self correct. Tuning the deadband and slide off behavior are important.
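
A typical deadband looks like the sketch below; the rescale matters so the remaining thumb travel still covers the full output range (parameter values are placeholders, not the shipping tuning):

[code]#include <math.h>

// v is the stick offset normalized to -1..1
float ApplyDeadband( float v, float deadband ) {
    if ( fabsf( v ) < deadband ) {
        return 0.0f;
    }
    // rescale so output still spans the full -1..1 range
    float sign = v < 0.0f ? -1.0f : 1.0f;
    return sign * ( fabsf( v ) - deadband ) / ( 1.0f - deadband );
}[/code]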
Level design criteria have advanced a lot since Wolfenstein, but I wasn't going to open up the option of us modifying the levels, even though the start of the first level is painfully bad for a first time player, with the tiny, symmetric rooms for them to get their nose mashed into walls and turned around in. The idea is that you started the game in a prison cell after bashing your guard over the head, but even with the exact same game tools, we would lead the player through the experience much better now. Some of the levels are still great fun to play, and it is interesting to read Tom Hall and John Romero's designer notes in the old hint manuals, but the truth is that some levels were scrubbed out in only a couple hours, unlike the long process of testing and adjustment that goes on today.
It was only after I thought I was basically done with the game that Tim Willits pointed out the elephant in the gameplay room -- for 95% of players, wandering around lost in a maze isn't very much fun.
Implementing an automap was pretty straightforward, and it probably added more to the enjoyment of the game than anything else. Before adding this, I thought that only a truly negligible amount of people would actually finish all 60 levels, but now I think there might be enough people that get through them to justify bringing the Spear of Destiny levels over later.
When I was first thinking about the project I sort of assumed that we wouldn't bother with music, but Wolf3D Redux already had code that converted the old id music format into ogg, so we wound up with support at the beginning, and it turned out pretty good. We wound up ripping the red book audio tracks from one of the later commercial Wolf releases and encoding at a different bitrate, but I probably wouldn't have bothered if not for the initial support. It would have been nice to re-record the music with a high quality MIDI synth, but we didn't have the original MIDI source, and Christian said that the conversion back from the id music format to midi was a little spotty, and would take a fair amount of work to get right. I emailed Bobby Prince, the original composer, to see if he had any high quality versions still around, but he didn't get back with me.
The game is definitely simplistic by modern standards, but it still has its moments. Getting the drop on a brown shirt just as he is pulling his pistol from the holster. Making an SS do the "twitchy dance" with your machine gun. Rounding a corner and unloading your weapon on ... a potted plant. Simplistic plays well on the iPhone.

@Programming notes@

Cass and I got the game running on the iPhone very quickly, but I was a little disappointed that various issues around the graphics driver, the input processing, and the process scheduling meant that doing a locked-at-60-hz game on the iPhone wasn't really possible. I hope to take these up with Apple at some point in the future, but it meant that Wolf would be a roughly two tick game. It is only "roughly" because there is no swapinterval support, and the timer scheduling has a lot of variability in it. It doesn't seem to matter all that much, the play is still smooth and fun, but I would have liked to at least contrast it with the perfect limit case.
It turns out that there were a couple issues that required work even at 30hz. For a game like Wolf, any PC that is in use today is essentially infinitely fast, and the Wolf3D Redux code did some things that were convenient but wasteful. That is often exactly the right thing to do, but the iPhone isn't quite as infinitely fast as a desktop PC.
Wolfenstein (and Doom) originally drew the characters as sparse stretched columns of solid pixels (vertical instead of horizontal for efficiency in interleaved planar mode-X VGA), but OpenGL versions need to generate a square texture with transparent pixels. Typically this is then drawn by either alpha blending or alpha testing a big quad that is mostly empty space. You could play through several early levels of Wolf without this being a problem, but in later levels there are often large fields of dozens of items that stack up to enough overdraw to max out the GPU and drop the framerate to 20 fps. The solution is to bound the solid pixels in the texture and only draw that restricted area, which solves the problem with most items, but Wolf has a few different heavily used ceiling lamp textures that have a small lamp at the top and a thin but full width shadow at the bottom. A single bounds doesn't exclude many texels, so I wound up including two bounds, which made them render many times faster.
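
Finding the restricted draw area is just an alpha scan over the sprite, along the lines of this sketch (one bound; the lamp case described above runs over two vertical ranges):

[code]// Scan a sprite's alpha mask for the tight bounds of the solid
// pixels, so only that sub-rectangle gets drawn.
typedef struct { int x0, y0, x1, y1; } bounds_t;

bounds_t SolidBounds( const unsigned char *alpha, int w, int h ) {
    bounds_t b = { w, h, -1, -1 };
    for ( int y = 0 ; y < h ; y++ ) {
        for ( int x = 0 ; x < w ; x++ ) {
            if ( alpha[y * w + x] ) {
                if ( x < b.x0 ) b.x0 = x;
                if ( y < b.y0 ) b.y0 = y;
                if ( x > b.x1 ) b.x1 = x;
                if ( y > b.y1 ) b.y1 = y;
            }
        }
    }
    return b;   // x1 < x0 means the image was entirely transparent
}[/code]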
The other problem was CPU related. Wolf3d Redux used the original ray casting scheme to find out which walls were visible, then called a routine to draw each wall tile with OpenGL calls. The code looked something like this:
[code]DrawWall( int wallNum ) {
    char        name[128];
    texture_t   *tex;

    sprintf( name, "walls/%d.tga", wallNum );
    tex = FindTexture( name );
    ...
}

texture_t *FindTexture( const char *name ) {
    int     i;

    for ( i = 0 ; i < numTextures ; i++ ) {
        if ( !strcmp( name, texture[i]->name ) ) {
            return texture[i];
        }
    }
    ...
} [/code]
I winced when I saw that at the top of the instruments profile, but again, you could play all the early levels that only had twenty or thirty visible tiles at a time without it actually being a problem. However, some later levels with huge open areas could have over a hundred visible tiles, and that led to 20hz again. The solution was a trivial change to something resembling:
[code]DrawWall( int wallNum ) {
    texture_t   *tex = wallTextures[wallNum];
    ...
} [/code]
Wolf3D Redux included a utility that extracted the variously packed media from the original games and turned them into cleaner files with modern formats. Unfortunately, an attempt at increasing the quality of the original art assets by using hq2x graphics scaling to turn the 64x64 art into better filtered 128x128 art was causing lots of sprites to have fringes around them due to incorrect handling of alpha borders. It wasn't possible to fix it up at load time, so I had to do the proper outline-with-color-but-0-alpha operations in a modified version of the extractor. I also decided to do all the format conversion and mip generation there, so there was no significant CPU time spent during texture loading, helping to keep the load time down. I experimented with the PVRTC formats, but while it would have been ok for the walls, unlike with DXT you can't get a lossless alpha mask out of it, so it wouldn't have worked for the sprites. Besides, you really don't want to mess with the carefully chosen pixels in a 64x64 block very much when you scale it larger than the screen on occasion.
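
The outline-with-color-but-0-alpha fixup amounts to bleeding a neighboring solid texel's color into each fully transparent texel, so bilinear filtering never drags in garbage. A single-pass sketch (a real tool would iterate to flood further outward):

[code]// RGBA8, in place. Transparent texels copy a solid neighbor's RGB
// while keeping alpha at zero.
void BleedAlphaBorders( unsigned char *rgba, int w, int h ) {
    static const int off[4][2] = { {1,0}, {-1,0}, {0,1}, {0,-1} };
    for ( int y = 0 ; y < h ; y++ ) {
        for ( int x = 0 ; x < w ; x++ ) {
            unsigned char *p = &rgba[( y * w + x ) * 4];
            if ( p[3] != 0 ) {
                continue;                   // already solid
            }
            for ( int i = 0 ; i < 4 ; i++ ) {
                int nx = x + off[i][0], ny = y + off[i][1];
                if ( nx < 0 || ny < 0 || nx >= w || ny >= h ) {
                    continue;
                }
                const unsigned char *n = &rgba[( ny * w + nx ) * 4];
                if ( n[3] != 0 ) {
                    p[0] = n[0]; p[1] = n[1]; p[2] = n[2];  // alpha stays 0
                    break;
                }
            }
        }
    }
}[/code]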
I also had to make one last minute hack change to the original media -- the Red Cross organization had asserted their trademark rights over red crosses (sigh) some time after we released the original Wolfenstein 3D game, and all new game releases must not use red crosses on white backgrounds as health symbols. One single, solitary sprite graphic got modified for this release.
User interface code was the first thing I started making other programmers do at Id when I no longer had to write every line of code in a project, because I usually find it tedious and unrewarding. This was such a small project that I went ahead and did it myself, and I learned an interesting little thing. Traditionally, UI code has separate drawing and input processing code, but on a touchscreen device, it often works well to do a combined "immediate mode interface", with code like this:
[code]if ( DrawPicWithTouch( x, y, w, h, name ) ) {
    menuState = newState;
} [/code]
Doing that for the floating user gameplay input controls would introduce a frame of response latency, but for menus and such, it works very well.
One of the worst moments during the development was when I was getting ready to hook up the automatic savegame on app exit. There wasn't any savegame code. I went back and grabbed the original 16 bit dos code for load / save game, but when I compiled I found out that the Wolf3d Redux codebase had changed a lot more than just the near / far pointer issues, asm code, and comment blocks. The changes were sensible things, like grouping more variables into structures and defining enums for more things, but it did mean that I wasn't dealing with the commercially tested core that I thought I was. It also meant that I was a lot more concerned about a strange enemy lerping through the world bug I had seen a couple times.
I seriously considered going back to the virgin codebase and reimplementing the OpenGL rendering from scratch. The other thing that bothered me about the Redux codebase was that it was basically a graft of the Wolf3D code into the middle of a gutted Quake 2 codebase. This was cool in some ways, because it gave us a console, cvars, and the system / OpenGL portable framework, and it was clear the original intention was to move towards multiplayer functionality, but it was a lot of bloat. The original wolf code was only a few dozen C files, while the framework around it here was several times that.
Looking through the original code brought back some memories. I stopped signing code files years ago, but the top of WL_MAIN.C made me smile:
[code]/*
=============================================================================

WOLFENSTEIN 3-D

An Id Software production

by John Carmack

=============================================================================
*/ [/code]

It wasn't dated, but that would have been in 1991.

In the end, I decided to stick with the Redux codebase, but I got a lot more free with hacking big chunks of it out. I reimplemented load / save game (fixing the inevitable pointer bugs involved), and by littering asserts throughout the code, I tracked the other problem down to an issue with making a signed comparison against one of the new enum types that compare as unsigned. I'm still not positive if this was the right call, since the codebase is sort of a mess with lots of vestigial code that doesn't really do anything, and I don't have time to clean it all up right now.
Of course, someone else is welcome to do that. The full source code for the commercial app is available on the web site. There was a little thought given to the fact that if I had reverted to the virgin source, the project wouldn't be required to be under the GPL. Wolf and the app store presents a sort of unique situation -- a user can't just compile the code and choose not to pay for the app, because most users aren't registered developers, and the data isn't readily available, but there is actually some level of commercial risk in the fast-moving iPhone development community. It will not be hard to take the code that is already fun to play, pull a bunch of fun things off the net out of various projects people have done with the code over the years, dust off some old map editors, and load up with some modern quality art and sound.
Everyone is perfectly within their rights to go do that, and they can aggressively try to bury the original game if they want. However, I think there is actually a pretty good opportunity for cooperation. If anyone makes a quality product and links to the original Wolf app, we can start having links to "wolf derived" or "wolf related" projects.

That should turn out to be a win for everyone.

I'm going back to Rage for a while, but I do expect Classic Doom to come fairly soon for the iPhone.

-----------------------------------------
John Carmack's .plan for May 27, 2009
-----------------------------------------

iPhone development: Doom Classic Progress Report

Source: http://www.idsoftware.com/iphone-doom-classic-progress/

I have been spending the majority of my time working on iPhone Doom Classic for several weeks now, and the first beta build went out to some external testers a couple days ago. I am moving back on to Rage for a while, but I expect to be able to finish it up for submission to the App Store next month.
Wolfenstein 3D Classic was a quickie project to satisfy my curiosity and test the iPhone waters, but Doom is a more serious effort. In addition to the millions of people with fond memories of the game, there is still an active gaming / development community surrounding the original Doom, and I don't want to disappoint any of them.
One of the things I love about open sourcing the old games is that Doom has been ported to practically everything with a 32 bit processor, from toasters to supercomputers. We hear from a lot of companies that have moved the old games onto various set top boxes and PDAs, and want licenses to sell them. We generally come to some terms in the five figure range for obscure platforms, but it is always with a bit of a sigh. The game runs, and the demo playbacks look good, but there is a distinct lack of actually caring about the game play itself. Making Doom run on a new platform is only a couple days of work. Making it a really good game on a platform that doesn't have a keyboard and mouse or an excess of processing power is an honest development effort.
To my surprise, Christian was able to dig up the original high quality source material for the Doom sounds, so we have 22khz 16 bit sound effects instead of the 11khz 8 bit ones from the original game. It turns out that I can barely tell the difference, which is a sign that we made good choices way back then about catering the sounds to the output device. If we were on the fence for any resource limits, I would have considered sticking with the originals, but the current OpenAL mixer code has errors with 8 bit source buffers, so I would have had to convert to 16 bit at load time anyway, and just referencing the high quality source media actually speeds up the load times.
The music is all stored as mp3, performed on a high quality synthesizer. For Wolf, we used ogg, because that's what was in the Redux codebase that I started with, but I don't have all that CPU performance margin anymore, so it was necessary to use the iPhone's audio decompression hardware through the AudioQueue services. The music is the largest part of the application, but everything else is still well over the 10 meg cellular app transfer limit, so I'm not tempted to try and squeeze it under like we did with Wolfenstein. Maybe being able to get an app over 3G really isn't very important to its success. The fact that people are downloading Myst on the iPhone is heartening -- I have ideas for leveraging our high end idTech-5 content creation pipeline for a future iPhone game, if people will go for a few hundred meg download.
The toughest question was the artwork. Since Wolf was selling well, I had planned on paying contractors to upscale all the Doom graphics to twice the original resolution. When I pulled all the graphics out and tallied it all up, it looked a lot more marginal than I had expected. There were over two thousand individual pieces of art, and it was over ten megatexels in exactly bounded area, let alone atlas fit or power of two inset. The PVRTC compressed formats would work great for the floors and ceilings, which are all nice 64x64 blocks, but it has issues for both the walls and sprites.
PVRTC textures must be power of two and, notably, square. If you want a 256 x 512 texture that needs to repeat in both axes, you need to resample it to 512 x 512 to use PVRTC, which means you lose half your compression benefit and get distorted filter kernels and mip level selections. Even worse, Doom had the concept of composited walls, where a surface was generated by adding, say, a switch texture on top of a wall texture. You can't do that sort of operation with PVRTC images. The clean sheet of paper solution to both problems is to design around textures that the hardware likes and use more geometry where you need to tile or combine them, but Doom wasn't well suited to that approach.
Character sprites don't get repeated, so a lot of them can be packed into a nice square 1024 x 1024 texture to minimize waste, but the PVRTC texture formats aren't well suited to sprite graphics. The DXT1 compression format has an exact bit mask for the alpha channel, which is what you want for an application like this. PVRTC treats alpha like any other color channel, so you get coupling between the alpha and color channels that results in partially transparent pixels ringing outside the desired character boundary. It works fine for things like clouds or fireballs, but not so good for character sprites. It looks like it should be possible to get an exact binary mask with the 2 bit PVRTC mode, which could be combined with a 4 bit PVRTC color texture to get a 6 bpp perfectly outlined sprite, but the multitexture performance on the iPhone, even with PVRTC textures, is not fast enough to prevent missing 30 fps when you have a horde of monsters in front of you.
We started to do some internal samples of up-scaled artwork to use as reference for getting the contractor quotes, and it just wasn't looking all that spectacular. Doubling the art and smoothing out the edges wasn't buying much. There was certainly a lot of room for improvement, since Doom was designed around a 256 color palette with a limited selection of intensity ramps for lighting, but moving there from the starting point would be tricky. If I went to one of our artists today and asked them to draw a bad-ass Baron of Hell throwing a fireball in a 256 x 256 x 16 bit block, I would get something a LOT better than the original art, but it would look different, not just better.
I was also a little taken aback by some of the backlash against the updated graphics that I put in for Wolf 1.1. I took the walls, guns, and decorative sprites from the Mac version of Wolfenstein, and had Eric use that as source to recreate some similar graphics that weren't present in the Mac version. After release, there were a number of reviews that complained, saying that it "ruined the classic feel". I have a couple thoughts about this: Changing the look on a point release is going to cause some level of complaint, so it is probably a good idea to make any changes from "classic" you think you might want in version 1.0. I also believe most of the complaints were due to the view weapons. The original gun artwork wasn't great, but the double-res ones weren't very good either, and they were a bit different looking. I debated with myself a bit about using them at all, and it looks like I probably shouldn't have. I can't see any drawback whatsoever to the double res walls and sprites, since they are in the same style, just better looking when you jam your face up against them.
In the end, I decided not to do anything with the DOOM source art. With the GPU accelerated filtering and 24 bit lighting it looks a lot better than it ever did, and with floors, ceilings, and lighting you don't seem to notice the low resolution as much as with Wolf.
With the speed (a solid 30 fps, even in the more aggressive later levels), the audio, the resolution, and the rendering quality, it is Doom as you remember it, which is quite a bit better than it actually was. If you go back and play the original game on a 386 with a sound blaster, you will be surprised at the 15 fps, FM-synth music, "bathroom tile sized" 320 x 200 pixels, external network game setup utility, and external keyboard configuration. A lot of people remember it as "The best game EVER!", but "ever" has sure moved a lot in the last decade!
Before I actually started coding on the project, I had visions of adding a lot of modern tuned effects to the core gameplay experience. It would certainly stay a sprite-and-sector based game, but there are many things that would be done differently with the benefit of a GPU and the wisdom of hindsight. Once I began actually working on it, it started to look like a bad idea for a number of reasons. I am trying to not be very disruptive in the main codebase, because I want it to stay a part of http://prboom.sourceforge.net/ instead of being another codebase fork. While I can certainly add a bunch of new features fairly quickly, iterating through a lot of user testing and checking for problems across the >100 commercial Doom levels would take a lot longer. There really is value in "classic" in this case, and there would be some degree of negative backlash to almost any "improvements" I made. There will still be a couple tiny tweaks, but nothing radical is changing in the basic play. It would be fun to take a small team, permanently fork it, and make a "Doom++" just for the iPhone, but that wouldn't be the best first move. Maybe later.
The iPhone interface around the game is all done nicely. Wolf Classic got dinged a bit for the blocky look of the buttons and interface components. I didn't actually see any complaints about the crappy monospace font, but it deserved some. Everything looks good now.
The initial release will be for OS 2.x, and support multiplayer over WiFi. A later release will be for 3.x only, and support bluetooth multiplayer. I looked into the possibility of 3G multiplayer, but the latencies just aren't good enough -- I see 380 or so pings from my phone to local servers. This was interesting, because I have talked to other hardware vendors that claim 3G latencies of half that. I'm not sure if there are issues with the iPhone, issues with AT&T's network in Dallas, or if the vendor was just mistaken about what they were getting. One anecdotal report is that iPhones work better in Japan than here, so it may be infrastructure.
I will probably have another update later with more technical details about the logic behind the new rendering architecture (rewritten for > 2x the speed of the original prBoom GL renderer), touch control issues, and so on.

-----------------------------------------
John Carmack's .plan for Nov 03, 2009
-----------------------------------------

iPhone development: Doom Classic

Source: http://www.idsoftware.com/doom-classic/doomdevelopment.htm

Way back in March when I released the source for Wolfenstein 3D Classic, I said that Doom Classic would be coming "real soon", and on April 27, I gave a progress report: http://www.idsoftware.com/iphone-doom-classic-progress/
I spent a while getting the multiplayer functionality up, and I figured I only had to spend a couple days more to polish things up for release.
However, we were finishing up the big iPhone Doom Resurrection project with Escalation Studios, and we didn't want to have two Doom games released right on top of each other, so I put Doom Classic aside for a while. After Doom Resurrection had its time in the sun, I was prepared to put the rest of the work into Doom Classic, but we ran into another schedule conflict. As I related in my Wolf Classic notes http://www.idsoftware.com/wolfenstein-3d-classic-platinum/wolfdevelopment.htm , Wolfenstein RPG for the iPhone was actually done before Wolfenstein Classic, but EA had decided to sit on it until the release of the big PC / console Wolfenstein game in August.
I really thought I was going to go back and finish things up in September, but I got crushingly busy on other fronts. In an odd little bit of serendipity, after re-immersing myself in the original Doom for the iPhone, I am now working downstairs at Id with the Doom 4 team. I'm calling my time a 50/50 split between Rage and Doom 4, but the stress doesn't divide. September was also the month that Armadillo Aerospace flew the level 2 Lunar Lander Challenge:

http://www.armadilloaerospace.com/n.x/Armadillo/Home/News?news_id=368

Finally, in October I SWORE I would finish it, and we aimed for a Halloween release. We got it submitted in plenty of time, but we ran into a couple approval hiccups that caused it to run to the very last minute. The first was someone incorrectly thinking that the "Demos" button that played back recorded demos from the game, was somehow providing demo content for other commercial products, which is prohibited. The second issue was the use of an iPhone image in the multiplayer button, which we had to make a last minute patch for.
@Release notes@

Ok, the game is finally out (the GPL source code is being packaged up for release today). Based on some review comments, there are a couple clarifications to be made:
Multiplayer requires a WiFi connection that doesn't have UDP port 14666 blocked. I'm quite happy with the simple and fast multiplayer setup, but it seems like many access points just dump the packets in the trash. If the multiplayer button on the main menu doesn't start pulsing for additional players after the first player has hit it, you won't be able to connect. I have also seen a network where the button would pulse, but the player would never get added to the player list, which meant that somehow the DNS packets were getting through, but the app packets weren't. It works fine on a normal AirPort install... More on networking below.
I took out tilt-to-turn just to free up some interface screen space, because I didn't know anyone that liked that mode, and my query thread on Touch Arcade didn't turn up people that would miss it a lot.
Evidently there are a few people that do care a lot, so we will cram that back in on the next update. The functionality is still there without a user interface, so you can enable it by four-finger-tapping to bring up the keyboard and typing "tiltturn 4000" or some number like that, and it will stay set. Make sure you have tiltmove pulled down to 0. I never got around to putting in a real console, but you can change a few parameters like that, as well as enter all the original doom cheat codes like IDDQD, IDKFA, etc.
I think that the auto-centering control sticks in Doom Classic are a better control scheme than the fixed sticks from Wolf Classic. The advice for wolf was to adjust the stick positions so that your thumbs naturally fell in the center point, so I just made that automatic for Doom. Effective control always involved sliding your thumbs on the screen, rather than discretely tapping it, and this mode forces you to do that from the beginning. Still, even if the new mode is some fraction "better", there are a lot of people who have logged a lot of hours in Wolfenstein Classic, and any change at all will be a negative initially. In the options->settings menu screen, there is a button labeled "Center sticks: ON" that can be toggled off to keep the sticks fixed in place like in Wolf.
A subtle difference is that the turning sensitivity is now graded so that a given small movement will result in a specific percentage increase in speed, no matter where in the movement range it is. With linear sensitivity, if you are 10 pixels off from the center and you move your thumb 10 pixels farther, then the speed exactly doubles. If you are 50 pixels off from the center, the same 10 pixel move only increases your turning rate by 20%. With ramped sensitivity, you would get a 20% (depending on the sensitivity scale) increase in speed in both cases, which tends to be better for most people. You can disable this by toggling the "Ramp turn: ON" option off.
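
The ramped response is essentially an exponential mapping. A hedged sketch of both curves, where the 0.02-per-pixel rate constant is illustrative tuning (it gives roughly a 20% change per 10 pixel move), not the shipping value:

[code]#include <math.h>

float TurnSpeedLinear( float pixelsFromCenter, float scale ) {
    return pixelsFromCenter * scale;
}

// Moving the thumb a fixed number of extra pixels multiplies the
// speed by (approximately) a fixed factor, no matter where in the
// range the thumb already is.
float TurnSpeedRamped( float pixelsFromCenter, float scale ) {
    return scale * ( expf( 0.02f * pixelsFromCenter ) - 1.0f );
}[/code]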
In hindsight, I should have had a nice obvious button on the main options screen that said "Wolfenstein Style" and had the same options, but I have always had difficulty motivating myself to do good backwards compatibility engineering. Even then, the movement speeds are different between the games, so it wouldn't have felt exactly the same.
It was a lot of fun to do this project, working on it essentially alone, as a contrast to the big teams on the major internal projects. I was still quite pleased with how the look and feel of the game holds up after so long, especially the "base style" levels. The "hell levels" show their age a lot more, where the designers were really reaching beyond what the technology could effectively provide.

@Future iPhone work@

We do read all the reviews in the App store, and we do plan on supporting Doom Classic with updates. Everything is still an experiment for us on the iPhone, and we are learning lessons with each product. At this point, we do not plan on making free lite versions of future products, since we didn't notice anything worth the effort with Wolfenstein, and other developers have reported similar findings.
We have two people at Id that are going to be dedicated to iPhone work. I doubt I will be able to personally open Xcode again for a few months, but I do plan on trying to work out a good touch interface for Quake Classic and the later 6DOF games. I also very much want to make at least a tech demo that can run media created with a version of our idTech 5 megatexture content creation pipeline. I'm not sure exactly what game I would like to do with it, so it might be a 500 mb free gee-whiz app...
Wolfenstein Classic Platinum was a break-in opportunity for the new internal iPhone developers. We were originally planning on making the Spear of Destiny levels available as in-app purchased content. Then we decided to make it a separate "Platinum Edition" application at a reasonable price. Finally, we decided that we would just make it a free update, but something has gone wrong during this process -- people who buy the app for the first time get everything working properly, but many people who upgrade the App from a previous purchase are seeing lots of things horribly broken. We are working with Apple to try to debug and fix this, but the workaround is to uninstall the app completely, then reload it. The exciting thing about Wolf Platinum is the support for downloadable levels, which is the beta test for future game capabilities. Using a URL to specify downloadable content for apps is a very clean way to interface to the game through a web page or email message.
The idMobile team is finishing up the last of the BREW versions of Doom 2 RPG, and work has started on an iPhone specific version, similar to the Wolfenstein RPG release. The real-time FPS games are never going to be enjoyable for a lot of people, and the turn based RPG games are pretty neat in many regards. If they are well received, we will probably bring over the Orcs&Elves games as well.
I want to work on a Rage themed game to coincide with Rage's release, but we don't have a firm direction or team chosen for it. I was very excited about doing a really-designed-for-the-iPhone first person shooter, but at this point I am positive that I don't have the time available for it.
@Networking techie stuff@
I doubt one customer in ten will actually play a network game of Doom Classic, but it was interesting working on it.
Way back in March when I was first starting the work, I didn't want the game to require 3.0 to run, and I generally try to work with the lowest level interfaces possible for performance critical systems, so I wasn't looking at GameKit for multiplayer. I was hoping that it was possible to use BSD sockets to allow both WiFi networking on 2.0 devices and WiFi or ad-hoc bluetooth on 3.0 devices. It turns out that it is possible, but it wasn't documented as such anywhere I could find.
I very much approve of Apple's strategy of layering Obj-C frameworks on top of Unix style C interfaces. Bonjour is a layer over DNS, and GameKit uses sockets internally. The only bit of obscure magic that goes on is that the bluetooth IP interface only comes into existence after you have asked DNS to resolve a service that was reported for it. Given this, there is no getting around using DNS for initial setup.
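
For illustration, here is a hedged sketch of that discovery path using the raw dns_sd.h C API that Bonjour exposes. The service type string is made up, and a real app would drive these with select() on the service file descriptors instead of blocking; error handling and cleanup are omitted.

#include <dns_sd.h>
#include <arpa/inet.h>
#include <stdio.h>

static void ResolveReply( DNSServiceRef ref, DNSServiceFlags flags,
                          uint32_t ifIndex, DNSServiceErrorType err,
                          const char *fullname, const char *hosttarget,
                          uint16_t port, uint16_t txtLen,
                          const unsigned char *txt, void *ctx ) {
    /* it is this resolve step that brings the bluetooth IP
       interface into existence */
    printf( "server at %s:%d\n", hosttarget, ntohs( port ) );
}

static void BrowseReply( DNSServiceRef ref, DNSServiceFlags flags,
                         uint32_t ifIndex, DNSServiceErrorType err,
                         const char *name, const char *type,
                         const char *domain, void *ctx ) {
    DNSServiceRef resolveRef;
    DNSServiceResolve( &resolveRef, 0, ifIndex, name, type, domain,
                       ResolveReply, NULL );
    DNSServiceProcessResult( resolveRef );   /* blocks until the reply fires */
}

void FindServers( void ) {
    DNSServiceRef browseRef;
    DNSServiceBrowse( &browseRef, 0, kDNSServiceInterfaceIndexAny,
                      "_doomserver._udp", "", BrowseReply, NULL );
    DNSServiceProcessResult( browseRef );    /* blocks until a service is found */
}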
With WiFi, you could still use your own broadcast packets to do player finding and stay completely within the base sockets interfaces, and this might even make some sense, considering that there appear to be some WiFi access points that will report a DNS service's existence that your app can't actually talk to.
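
A minimal sketch of that pure-sockets approach, with a made-up port and payload: the only special step is enabling SO_BROADCAST, which is off by default.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

#define DISCOVERY_PORT 28080   /* hypothetical */

void SendDiscoveryBroadcast( void ) {
    int s = socket( AF_INET, SOCK_DGRAM, 0 );
    int on = 1;
    setsockopt( s, SOL_SOCKET, SO_BROADCAST, &on, sizeof( on ) );

    struct sockaddr_in to;
    memset( &to, 0, sizeof( to ) );
    to.sin_family = AF_INET;
    to.sin_port = htons( DISCOVERY_PORT );
    to.sin_addr.s_addr = htonl( INADDR_BROADCAST );

    const char msg[] = "FIND_SERVER";   /* hypothetical payload */
    sendto( s, msg, sizeof( msg ), 0, (struct sockaddr *)&to, sizeof( to ) );
    close( s );
}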
For every platform I have done networking on previously, you could pretty much just assume that you had the loopback interface and an Ethernet interface, and you could just use INADDR_ANY for pretty much everything. Multiple interfaces used to just be an issue for big servers, but the iPhone can have a lot of active interfaces -- loopback, WiFi Ethernet, Bluetooth Ethernet, and several point to point interfaces for the cellular data networks.
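
You can see this for yourself with getifaddrs(), the standard BSD call for walking the interface list; a quick sketch:

#include <ifaddrs.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>

void ListInterfaces( void ) {
    struct ifaddrs *list, *ifa;
    if ( getifaddrs( &list ) != 0 ) {
        return;
    }
    for ( ifa = list; ifa != NULL; ifa = ifa->ifa_next ) {
        /* on an iPhone this can show lo0, en0 (WiFi), the bluetooth
           interface, and pdp_ip interfaces for the cellular data links */
        if ( ifa->ifa_addr && ifa->ifa_addr->sa_family == AF_INET ) {
            struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
            printf( "%s : %s\n", ifa->ifa_name, inet_ntoa( sin->sin_addr ) );
        }
    }
    freeifaddrs( list );
}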
At first, I was excited about the possibility of multiplayer over 3G. I had been told by someone at Intel that they were seeing ping times of 180 ms on 3G devices, which could certainly be made to work for gaming.
Unfortunately, my tests, here in Dallas at least, show about twice that, which isn't worth fighting. I'm a bit curious whether they were mistaking one-way times for round trips, or if the infrastructure in California is really that much better. In any case, that made my implementation choice clear -- local link networking only.
A historical curiosity: the very first release of the original Doom game on the PC used broadcast IPX packets for LAN networking. This seemed logical, because broadcast packets for a network game of N players has a packet count of just N packets on the network each tic, since everyone hears each packet. The night after we released the game, I was woken up by a call from a college sysadmin yelling at me for crippling their entire network. I didn't have an unlisted number at the time. When I had decided to implement network gaming, I bought and read a few books, but I didn't have any practical experience, so I had thought that large networks were done like the books explained, with routers connecting independent segments. I had no idea that there were many networks with thousands of nodes connected solely by bridges that forwarded all broadcast packets over lower bit rate links. I quickly changed the networking to have each peer send addressed packets to the other peers. More traffic on the local segment, but no chance of doing horrible things to bridged networks.
WiFi is different from wired Ethernet in a few ways. WiFi clients don't actually talk directly to each other, they talk to the access point, which rebroadcasts the packet to the destination, so every packet sent between two WiFi devices is actually at least two packets over the air.
An ad-hoc WiFi network would have twice the available peer to peer bandwidth and half the packet drop rate of an access point based one. Another point is that unlike wired Ethernet, the WiFi link level actually does packet retransmits if the destination doesn't acknowledge receipt. Packets won't be retransmitted forever, and buffer space is limited, so it can't be relied on the way TCP can, but packet drops are rarer than you would expect. This also means that there are lots of tiny ACK packets flying around, which contributes to reduced throughput. Broadcast packets are in-between -- the broadcast packet is sent from the source to the access point with acknowledgment and retransmits, but since the access point can't know who it is going to, it just fires it out blind a single time.
I experimentally brought the iPhone networking up initially using UDP broadcast packets, but the delivery was incredibly poor. Very few packets were dropped, but hundreds of milliseconds could sometimes go by with no deliveries, then a huge batch would be delivered all at once. I thought it might be a policy decision on our congested office access point, but it behaved the same way at my house on a quiet LAN, so I suspect there is an iPhone system software issue. If I had a bit more time, I would have done comparisons with a WiFi laptop. I had pretty much planned to use addressed packets anyway, but the broadcast behavior was interesting.
Doom PC was truly peer to peer, and each client transmitted to every other client, for N * (N-1) packets every tic. It also stalled until valid data had arrived from every other player, so adding more players hurts in two different ways -- each tic needs more packets, and more packets mean more congestion, which makes each individual packet more likely to drop. The plus side of an arrangement like this is that it is truly fair: no client has any advantage over any other, even if one or more players are connected by a lower quality link. Everyone gets the worst common denominator behavior.
I settled on a packet server approach for the iPhone, since someone had to be designated a "server" anyway for DNS discovery, and it has the minimum non-broadcast packet count of 2N packets every tic. Each client sends a command packet to the server each tic, the server combines all of them, then sends an addressed packet back to each client. The remaining question was what the server should do when it hasn't received an updated command from a client. When the server refused to send out a packet until it had received data from all clients, there was a lot more variability in the arrival rate. It could be masked by intentionally adding some latency on each client side, but I found that it plays better to just have the server repeat the last valid command when it hasn't gotten an update. This does mean that there is a slight performance advantage to being the server, because you will never drop an internal packet.
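
In sketch form (hypothetical structures and helper, not the Doom Classic source), the server side of a tic looks something like this:

#define MAX_PLAYERS 4

typedef struct {
    int forwardMove, sideMove, turn, buttons;
} usercmd_t;

/* most recent valid command received from each client */
static usercmd_t lastCmd[MAX_PLAYERS];

void SendToClient( int client, const void *data, int len );  /* hypothetical transport */

/* called once per tic on the server */
void ServerCombineAndSend( void ) {
    usercmd_t combined[MAX_PLAYERS];
    for ( int i = 0; i < MAX_PLAYERS; i++ ) {
        /* if no update arrived from client i this tic, lastCmd[i] still
           holds the previous command, so it just gets repeated rather
           than stalling everyone */
        combined[i] = lastCmd[i];
    }
    for ( int i = 0; i < MAX_PLAYERS; i++ ) {
        SendToClient( i, combined, sizeof( combined ) );
    }
}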
The client always stalls until it receives a server packet; there was no way I had the time to develop any latency reducing / drop mitigating prediction mechanisms. There are a couple full client / server, internet capable versions of Doom available on the PC, but I wanted to work from a more traditional codebase for this project.
So, I had the game playing well over WiFi, but communication over the Bluetooth interface was significantly worse. There was an entire frame of additional latency versus WiFi, and the user mode Bluetooth daemon was also sucking up 10% of the CPU. That would have been livable, but there were regular surges in the packet delivery rate that made it basically unplayable.
Surging implies a buffer somewhere backing up and then draining, and I had seen something similar but less damaging occasionally on WiFi as well, so I wondered if there might be some iPhone system communication going on. I spent a little while with Wireshark trying to see if the occasional latency pileup was due to actual congestion, and what was in the packets, but I couldn't get my MacBook into promiscuous WiFi mode, and I didn't have the time to configure a completely new system.
In the end, I decided to just cut out the Bluetooth networking and leave it with WiFi. There was a geek-neatness to having a net game with one client on WiFi and another on Bluetooth, but I wasn't going to have time to wring it all out.
johnc_plan_2010.txt
ADDED
@@ -0,0 +1,73 @@
-----------------------------------------
John Carmack's .plan for Oct 26, 2010
-----------------------------------------
RAGE on iPhone/iPad/iPod
Source: http://www.bethblog.com/index.php/2010/10/29/john-carmack-discusses-rage-on-iphoneipadipod-touch/
@RAGE for iPhone@
Our mobile development efforts at id took some twists and turns in the last year. The plan was always to do something RAGE-related on the iPhone/iPad/iPod touch next, but with all the big things going on at id, the mobile efforts weren't front and center on the priority list. There had been a bit of background work going on, but it was only towards the end of July that I was able to sit down and write the core engine code that would drive the project.
I was excited about how well it turned out, and since this was right before QuakeCon, I broke with tradition and did a live technology demo during my keynote. In hindsight, I probably introduced it poorly. I said something like "It's RAGE. On the iPhone. At 60 frames a second." Some people took that to mean that the entire PC/console game experience was going to be on the iPhone, which is definitely not the case.
What I showed was a technology demo, written from scratch, but using the RAGE content creation pipeline and media. We do not have the full RAGE game running on iOS, and we do not plan to try. While it would (amazingly!) actually be possible to compile the full-blown PC/console RAGE game for an iPhone 4 with some effort, it would be a hopelessly bad idea. Even the latest and greatest mobile devices are still a fraction of the power of a 360 or PS3, let alone a high end gaming PC, so none of the carefully made performance tradeoffs would be appropriate for the platform, to say nothing of the vast differences in controls.
What we do have is something unlike anything ever seen on the iOS platforms. It is glorious, and a lot of fun. Development has been proceeding at high intensity since QuakeCon, and we hope to have the app out by the end of November.
The technical decision to use our megatexture content creation pipeline for the game levels had consequences for its scope. The data required for the game is big. Really, really big. Seeing Myst do well on the iPhone with a 700 meg download gave me some confidence that users would still download huge apps, and that became the target size for our standard definition version, but the high definition version for iPad / iPhone 4 will be around twice that size. This is more like getting a movie than an app, so be prepared for a long download. Still, for perspective, the full scale RAGE game is around 20 gigs of data with JPEG-XR compression, so 0.7 gigs of non-transcoded data is obviously a tiny slice of it.
Since we weren't going to be able to have lots of hugely expansive levels, we knew that there would be some disappointment if we went out at a high price point, no matter how good it looked. We have experimented with a range of price points on the iPhone titles so far, but we had avoided the very low end. We decided that this would be a good opportunity to try a $0.99 SD / $1.99 HD price point. We need to stay focused on not letting the project creep out of control, but I think people will be very happy with the value.
The little slice of RAGE that we decided to build the iPhone product around is "Mutant Bash TV", a post apocalyptic combat game show in the RAGE wasteland. This is the perfect setup for a quintessential first person shooter gameplay experience -- you pick your targets, aim your shots, time your reloads, dodge the bad guys, and try and make it through to the end of the level with a better score than last time. Beyond basic survival, there are pickups, head shots, and hit streak multipliers to add more options to the gameplay, and there is a broad range of skill levels available from keep-hitting-fire-and-you-should-make-it to almost-impossible.
A large goal of the project has been to make sure that the levels can be replayed many times. The key is making the gameplay itself the rewarding aspect, rather than story progression, character development, or any kind of surprises. Many of the elements that made Doom Resurrection good the first time you played it hurt the replayability, for instance. RAGE iOS is all action, all the time. I have played the game dozens of times, and testing it is still fun instead of a chore.
@Technical Geek Details@
The id Tech 5 engine uses a uniform paged virtual texture system for basically everything in the game. While the algorithm would be possible on 3GS and later devices, it has a substantial per-fragment processing cost, and updating individual pages in a physical texture is not possible with PVRTC format textures. The approach used for mobile RAGE is to do the texture streaming based on variable sized contiguous "texture islands" in the world. This is much faster, but it forces geometric subdivision of large surfaces, and must be completely predictive instead of feedback reactive. Characters, items, and UI are traditionally textured.
We build the levels and preview them in RAGE on the PC, then run a profiling / extraction tool to generate the map data for the iOS game. This tool takes the path through the game and determines which texture islands are going to be visible, and at what resolution and orientation. The pixels for the texture island are extracted from the big RAGE page file, then anisotropically filtered into as many different versions as needed, and packed into 1024x1024 textures that are PVRTC compressed for the device.
The packing into the textures has conflicting goals -- to minimize total app size you want to cram texture islands in everywhere they can fit, but you also don't want to scatter the islands needed for a given view into a hundred different textures, or radically change your working set in nearby views. As with many NP-complete problems, I wound up with a greedy value metric optimizing allocation strategy.
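
As a loose sketch of what a greedy strategy like that can look like (entirely hypothetical names and value metric, not the actual tool): repeatedly pick the highest-value island that still fits in the space remaining in the current 1024x1024 page.

typedef struct {
    int   w, h;      /* island size in texels */
    int   packed;
    float value;     /* e.g. how many nearby views want this island */
} island_t;

/* returns the index of the best unpacked island that fits, or -1 */
int PickNextIsland( const island_t *islands, int count, int freeW, int freeH ) {
    int best = -1;
    for ( int i = 0; i < count; i++ ) {
        if ( islands[i].packed || islands[i].w > freeW || islands[i].h > freeH ) {
            continue;
        }
        if ( best == -1 || islands[i].value > islands[best].value ) {
            best = i;
        }
    }
    return best;
}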
Managing over a gig of media made dealing with flash memory IO and process memory management very important, and I did a lot of performance investigations to figure things out.
Critically, almost all of the data is static, and can be freely discarded. iOS does not have a swapfile, so if you use too much dynamic memory, the OS gives you a warning or two, then kills your process. The bane of iOS developers is that "too much" is not defined, and in fact varies based on what the other apps in memory (Safari, Mail, iPod, etc.) have done. If you read all your game data into memory, the OS can't do anything with it, and you are in danger. However, if all of your data is in a read-only memory mapped file, the OS can throw it out at will. This will cause a game hitch when you need it next, but it beats an abrupt termination. The low memory warning does still cause the frame rate to go to hell for a couple seconds as all the other apps try to discard things, even if the game doesn't do much.
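
A minimal sketch of that read-only mapping approach (hypothetical helper name, error handling trimmed):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

const void *MapReadOnly( const char *path, size_t *lengthOut ) {
    int fd = open( path, O_RDONLY );
    if ( fd == -1 ) {
        return NULL;
    }
    struct stat st;
    fstat( fd, &st );
    /* PROT_READ pages are clean, so the kernel can drop them under
       pressure and fault them back in later -- a hitch, not a kill */
    void *base = mmap( NULL, (size_t)st.st_size, PROT_READ, MAP_FILE | MAP_SHARED, fd, 0 );
    close( fd );   /* the mapping stays valid after close */
    if ( base == MAP_FAILED ) {
        return NULL;
    }
    *lengthOut = (size_t)st.st_size;
    return base;
}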
Interestingly, you can only memory map about 700 megs of virtual address space, which is a bit surprising for a 32 bit OS. I expected at least twice that, if not close to 3 gigs. We sometimes have a decent fraction of this mapped.
A page fault to a memory mapped file takes between 1.8 ms on an iPhone 4 and 2.2 ms on an iPod 2, and brings in 32k of data. There appears to be an optimization where if you fault at the very beginning of a file, it brings in 128k instead of 32k, which has implications for file headers.
I am pleased to report that fcntl( fd, F_NOCACHE ) works exactly as desired on iOS -- I always worry about behavior of historic unix flags on Apple OSs. Using this and page aligned target memory will bypass the file cache and give very repeatable performance ranging from the page fault bandwidth with 32k reads up to 30 mb/s for one meg reads (22 mb/s for the old iPod). This is fractionally faster than straight reads due to the zero copy, but the important point is that it won't evict any other buffer data that may have better temporal locality. All the world megatexture data is managed with uncached reads, since I know what I need well ahead of time, and there is a clear case for eviction. When you are past a given area, those unique textures won't be needed again, unlike, say, monster animations and audio, which are likely to reappear later.
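
A sketch of an uncached read along those lines (hypothetical wrapper; a real one would check errors and keep sizes and offsets page aligned):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_BYTES 4096

void *ReadUncached( const char *path, size_t bytes ) {
    int fd = open( path, O_RDONLY );
    if ( fd == -1 ) {
        return NULL;
    }
    fcntl( fd, F_NOCACHE, 1 );   /* bypass the unified buffer cache */

    void *buffer = NULL;
    posix_memalign( &buffer, PAGE_BYTES, bytes );   /* page aligned target */
    ssize_t got = read( fd, buffer, bytes );
    close( fd );
    if ( got != (ssize_t)bytes ) {
        free( buffer );
        return NULL;
    }
    return buffer;
}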
I pre-touch the relevant world geometry in the uncached read thread after a texture read has completed, but in hindsight I should have bundled the world geometry directly with the textures and also gotten that with uncached reads.
OpenAL appears to have a limit of 1024 sound buffers, which we bumped into. We could dynamically create and destroy the static buffer mappings without too much trouble, but that is a reasonable number for us to stay under.
Another behavior of OpenAL that surprised me was finding (by looking at the disassembly) that it touches every 4k of the buffer on a Play() command. This makes some sense, forcing it to page the entire thing into ram so you don't get broken sound mixing, but it does unpredictably stall the thread issuing the call. I had sort of hoped that they were just eating the page faults in the mixing thread with a decent sized mix ahead buffer, but I presume that they found pathological cases of a dozen sound buffers faulting while the GPU is sucking up all the bus bandwidth or some such. I may yet queue all OpenAL commands to a separate thread, so if it has to page stuff in, the audio will just be slightly delayed instead of hitching the framerate.
I wish I could prioritize the queuing of flash reads -- game thread CPU faults highest, sound samples medium, and textures lowest. I did find that breaking the big texture reads up into chunks helped with the worst case CPU stalls.
There are two project technical decisions that I fretted over a lot:
Because I knew that the basic rendering technology could be expressed with fixed function rendering, the game is written to OpenGL ES 1.1, and can run on the older MBX GPU platforms. While it is nice to support older platforms, all evidence is that they are a negligible part of the market, and I did give up some optimization and feature opportunities for the decision.
It was sort of fun to dust off the old fixed function puzzle skills. For instance, getting monochrome dynamic lighting on top of the DOT3 normal mapping in a single pass involved sticking the lighting factor in the alpha channel of the texture environment color so it feeds through to the blender, where a GL_SRC_ALPHA, GL_ZERO blend mode effects the modulation on the opaque characters. This sort of fixed function trickery still makes me smile a bit, but it isn't a relevant skill in the modern world of fragment shaders.
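
As best I can reconstruct it from that description (this is my guess at the combiner state, not id's actual code), the single pass looks something like this in GL ES 1.1:

#include <OpenGLES/ES1/gl.h>

void SetupDot3WithLightFactor( float lightFactor ) {
    glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE );

    /* RGB: normal map texel dotted with the light vector in primary color */
    glTexEnvi( GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB );
    glTexEnvi( GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE );
    glTexEnvi( GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR );

    /* alpha: pass the lighting factor through from the env constant color */
    glTexEnvi( GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE );
    glTexEnvi( GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_CONSTANT );
    const GLfloat envColor[4] = { 0.0f, 0.0f, 0.0f, lightFactor };
    glTexEnvfv( GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, envColor );

    /* the blender then modulates the DOT3 result by that alpha */
    glEnable( GL_BLEND );
    glBlendFunc( GL_SRC_ALPHA, GL_ZERO );
}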
The other big one is the codebase lineage.
My personally written set of iPhone code includes the renderer for Wolfenstein RPG, all the iPhone specific code in Wolfenstein Classic and Doom Classic, and a few one-off test applications. At this point, I feel that I have a pretty good idea of what The Right Thing To Do on the platform is, but I don't have a mature expression of that in a full game. There is some decent code in Doom Classic, but it is all C, and I would prefer to do new game development in (restrained) C++.
What we did have was Doom Resurrection, which was developed for us by Escalation Studios, with only a few pointers here and there from me. The play style was a pretty close match (there is much more freedom to look around in RAGE), so it seemed like a sensible thing. This fits with the school of thought that says "never throw away the code" (http://www.joelonsoftware.com/articles/fog0000000069.html). I take issue with various parts of that, and much of my success over the years has involved wadding things up and throwing it all away, but there is still some wisdom there.
I have a good idea what the codebase would look like if I wrote it from scratch. It would have under 100k of mutable CPU data, there wouldn't be a resource related character string in sight, and it would run at 60 fps on new platforms / 30 fps on old ones. I'm sure I could do it in four months or so (but I am probably wrong). Unfortunately, I can't put four months into an iPhone project. I'm pushing it with two months -- I have the final big RAGE crunch and forward looking R&D to get back to.
So we built on the Resurrection codebase, which traded various compromises in code efficiency for expediency. It was an interesting experience for me, since almost all the code that I normally deal with has my "coding DNA" on it, because the id Software coding standards were basically "program the way John does." The Escalation programmers come from a completely different background, and the codebase is all STL this, boost that, fill up the property list, dispatch the event, and delegate that.
I had been harboring some suspicions that our big codebases might benefit from the application of some more of the various "modern" C++ design patterns, despite seeing other large game codebases suffer under them. I have since recanted that suspicion.
I whine a lot about it (occasionally on twitter), and I sometimes point out various object lessons to the other mobile programmers, but in the end, it works, and it was probably the right decision.