databits

Members
  • Posts

    8
  • Joined

  • Last visited

About databits

  • Birthday January 2

Profile Information

  • Gender
    Male
  • Location
    Redmond, WA
  • Interests
    Game software development


  1. That was kind of part of my point. The same issue exists if the person IS still on the project, but ends up out on vacation or dealing with a family emergency for an unknown amount of time. Nameless2you, I did read the post, entirely in fact. It's why I deliberately said, "..add or edit...". I myself don't distinguish between the two. Code is code, regardless of whether it's a new addition, a re-factor, or just a bug correction. The whole point of commenting is to explain what a piece of code is supposed to be doing. If you need to track down an author for any reason to explain it, then the code probably isn't commented properly, is poorly written, or both (assuming you're actually a programmer and can read the code to begin with). [edit]: added missing space
  2. Actually, at my university we'll be doing lectures, recorded in HD, on how to use both Mercurial and GIT (separate lectures). While the lectures are meant for developers, end users may be able to benefit from them as well. If it'd help people, once we've completed the lectures and the video editing and put them online, I can post a link for anyone who's interested in learning how to use them (they'll likely be on YouTube).
  3. I respect your opinion on putting names in comments, but I still don't agree. If people are working on an open source project solely for attention and gratitude, then there's already an issue. You don't need your name next to every piece of code you add or edit to receive thanks for contributing to the project, and as Lighta pointed out, it does no good having a name for someone who has retired from the project. Besides, most users of the software won't even see it, and if they do they likely won't care.

There's a reason that many other projects out there have something akin to a credits page or document. What happens if the section of code you wrote ends up being replaced or removed? Should you no longer receive credit for contributing to the project? If you're in a credits page/document, rather than just a name strewn around the code, it's still shown that you contributed, even if your contribution is no longer used.

As for the bad commit messages, that's just people not really knowing how to use version control. Commit early, commit often: small, concise commits with a short description of what the commit does. A merge message should be exactly that, a merge. If there's anything other than merging and conflict resolution in it, then the developer shouldn't do that. Just because something is open source doesn't mean people working on it should ignore good practices.
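The "commit early, commit often" workflow above can be sketched with Git in a throwaway repository; the file name, settings, and messages here are placeholders, not anything from the *Athena source:

```shell
# Throwaway repository to demonstrate small, focused commits.
rm -rf /tmp/commit-demo && mkdir /tmp/commit-demo && cd /tmp/commit-demo
git init -q
git config user.email "dev@example.com"   # placeholder identity for the demo
git config user.name "Dev"

# Commit early, commit often: one logical change, one short message.
echo "int max_level = 99;" > config.c
git add config.c
git commit -q -m "Add base level cap setting"

echo "int max_job_level = 50;" >> config.c
git add config.c
git commit -q -m "Add job level cap setting"

# The history reads as a list of small, self-describing changes.
git log --oneline
```

Each commit stays reviewable on its own, and the one-line log doubles as a changelog.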
  4. Good code documentation is a great idea, but only when done properly. It's as simple as following a few basic rules.

Intention (what's the code supposed to do?)
Don't comment individual lines of code. Instead, comment entire sections or blocks with a short explanation of what that portion of code is supposed to do (not what it does). For example, in the code posted above, avoid things like this:

    default: //do nothing
        break;

This comment is meaningless. It's obvious that the default case does nothing; the comment itself serves no purpose.

Consistency
Try to pick a standard to use for the project and stick to it. For example, don't do things like this:

    //Comment 1

versus

    // Comment 2

This is a little harder to do on group projects, but it's not really any different from bracket and white-space usage. People working on the code should adapt to the style the project uses.

Authoring and Credit
Don't write your name in comments next to sections of code that you modify or add. eAthena has stuff like this strewn throughout it, and it serves no purpose to the code itself. If you need to find who changed a line, it's not hard to look at the history of the file in version control. If each file needs a list of authors, it should be at the top, in the file's comment header. You don't even need a comment on what was changed; again, you can refer to the commit logs.
  5. Actually, GIT is very well proven; what do you think changes to the Linux kernel are tracked with? The Linux kernel dwarfs *Athena both in size (millions of lines of code) and in number of developers. Mercurial is also well proven, and has far better cross-platform support than GIT with all the same benefits.

As for the argument against people updating their servers, it's about as simple as installing the software and checking out the new repo; essentially zero difference. Sure, people who made custom modifications would have a harder time changing, but no more than if they had switched from eA to rA (most people making custom modifications are probably advanced enough to make the migration anyhow). Under Windows the tools are similar enough to SVN that migration would be pretty easy for most people (TortoiseHG actually has a far better interface than TortoiseSVN).

For a small project such as *Athena, I can fully attest that using a distributed system does indeed improve workflow immensely. I'm a senior full-time student at DigiPen, and we not only do a ton of individual projects for our CS degree program, we also have 4 years of small to large collaborative team game projects (my last team in Fall was 12 people). When it all comes down to it, HG/GIT beat SVN hands down in every case I've used them.

First and foremost, a distributed source control system doesn't have a single point of failure like SVN does. Switching to a mirror when a server goes down is a pain in the ass in SVN, whereas in GIT/HG you just add another remote server entry and go. We actually had a situation like that last semester when the school repository server died near the end of the semester; we seamlessly migrated to Bitbucket within a few minutes, then just synced back to the school's servers when they were back online. During my first year something similar happened while we were using SVN, and it left the entire game team dead in the water.

Now I will admit that there are some issues with distributed systems like GIT/HG, but these are issues that *Athena will never run into. Generally you run into problems when you need to update large files often. I'm not talking about large code files (trivial, because they compress very well), but things like game art, audio, and video assets (which a server has none of).

When it comes down to doing something like, say, a complete refactor of how an entire system works, this is where GIT and HG really shine. You can easily create a local branch and work on the changes while keeping it merged and up-to-date with upstream changes. Then, when you're done, you can easily merge those changes back into the main branch and close your branch. At no point outside of pulling upstream changes are you required to communicate with the server itself; hell, not even then (you can sync from -any- copy of the repository). The choice to push the actual branch upstream is totally up to you.

Also, comparing the size of the GIT vs. SVN repository isn't really fair; you need to understand how each technology works. Overall, GIT/HG repos would end up being larger in the long run. Sure, initially one may be smaller, but keep in mind that since it's distributed source control, even though the data is compressed, it's all there. You can roll all the way back to the very first commit in GIT if you want, without ever contacting the server (which is why it has problems with art assets and such). Essentially, it retains -everything-, while SVN only checks out the latest copies of the files locally.

For the arguments about using both, I'd highly suggest against doing that. The majority of your users are just going to do a clone checkout and run with it, with occasional updates. Hell, most of them probably don't even want to do that; they'd rather download and unpack a zip file. Very few people will likely be pushing changes to the code itself. Using two version control systems just means that you get to manually apply patch files between the systems, which is why doing both is a nightmare (as other devs pointed out).

Though as a counter-argument, setting up TortoiseGIT is far harder for most people than TortoiseSVN or TortoiseHG. You can't just install it and go; you have to install both TortoiseGIT *AND* MSysGit to use it (which can have many problems of its own). The ease of use is why I'd suggest Mercurial over GIT. It's essentially the exact same thing, just with better tools and simpler setup.
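The "add another remote entry and go" failover and the local-branch refactor workflow described above both reduce to a handful of Git commands. A runnable local sketch (the remote URL, branch name, and file are placeholders; no server is actually contacted):

```shell
# Local demonstration of the branch-and-merge workflow; no server involved.
rm -rf /tmp/branch-demo && mkdir /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
echo "original code" > map.c
git add map.c && git commit -q -m "Initial commit"

# A second remote is just another entry; a dead server never blocks local work.
git remote add backup https://example.org/project.git   # placeholder URL

# Refactor on a purely local branch; it exists nowhere but this machine.
git checkout -q -b refactor-items
echo "refactored code" > map.c
git commit -q -am "Refactor item handling"

# Fold the finished work back into the main branch and close the branch.
git checkout -q -
git merge -q refactor-items
git branch -q -d refactor-items
```

Nothing here required a round-trip to a server; pushing the result to `backup` (or anywhere else) is a separate, optional step.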
  6. I'm well aware of the warnings; you get a crapload of them when compiling under Linux with GCC (mostly casts). Yes, that's because it's being compiled as C++, not C; I get that. In the case of Visual Studio, it's not the compiler that chooses, it's the VS build system. Microsoft has two compilers: a C compiler and a C++ compiler. The C compiler for MSVS hasn't had a major update in a long time, whereas each new release of VC (now 2011) adds new C++ features, optimizations, and improvements to the C++ compiler. Sticking to strict C, at least in the case of the VS builds, is sort of self-hindering. I'm not saying that you have to switch to C++; it's more of a suggestion than anything.
  7. Glad to hear there's more support for GIT. I love it and use it for a lot of projects, but be wary that the Mercurial tools under Windows (TortoiseHG) are far superior to the GIT tools (MSysGit + TortoiseGit). Though that may change soon enough: after talking to the GitHub devs at GDC this year, they seem to really be gunning for better Windows GIT support, so we may see better tools sooner rather than later.

As for switching over to C++ from C, it really isn't suicide. eA isn't really pure C to begin with, and I'd wager it's probably already compiled as C++ in almost every case (especially in VC, since it hasn't even updated its C compiler in ages). Gradual re-factoring over time is possible in any project, more so with C/C++. The problem is finding out what portions of the source code actually do, since, as previously pointed out in other posts, it's not exactly well documented (and not exactly self-documenting code either). A lot of the code could really stand to be cleaned up with proper interfaces and abstraction. In either case, a lot of this is just thoughts.

I WOULD suggest that at the very least you do the same thing I've been working on: combine items in storage, guild storage, carts, inventory, equips, and drops into a single item instance object. There shouldn't be multiple notions of the exact same data strewn all over the server. It's one of the things I've been working to clean up. The problem is, when I forked things, not only did I not stick with SVN (I moved to GIT, and may possibly move again to Hg), I also wiped out portions of the code that I had no intention of using as a server operator (for instance, among other things, I removed both auction house and mercenary support from my server, as I never intend on having them).
  8. I just today found out about this project, and it seems interesting to say the least. A while back I went ahead and forked the eA project personally as well. The goal was to re-architect both the SQL database design and the actual SQL bindings in the server itself, primarily because both the database schema and the C libmysql database bindings used are horrible. They're also missing a lot of critical information (date and time info for creation and modification, please?), introduce obvious points of failure (when I saw in the source code that there are 4 separate notions of an item instance, it was quite an amusing day), and use prepared statements incorrectly. Forking and fixing was my alternative to writing database views for -every- table in eA.

So then I came across this project, and have been reading through some things here and there. As I said, I find it interesting that this project has started while eA has more or less died off. But I also have a few questions.

My first question is: has anyone thought about switching from SVN to a more robust version control system like GIT or Mercurial? Ok, sure, the SVN repository fork allows you to keep all the old commit logs around (though you can import SVN history into other systems), but there are so many advantages to using an actual distributed source control system over SVN nowadays. What's more, the tools (for Mercurial at least) are better on Windows, OSX, and Linux. Also, you can use both GitHub and Bitbucket for repository hosting, which tend to have better repository interfaces than SourceForge.

The second question is: would you entertain the idea of abandoning the out-of-date "strict C" ideal? Nowadays there's no real good reason to write things in C. In many cases it makes things harder to read and requires more code to write. It's not like you'd be running this on hardware that could only support ANSI C either. You also lose out on all the advantages of newer C++ language features, and possibly some optimizations. Though, I somehow doubt any single person is actually compiling eA or rA with a C compiler.

One reason I bring these up is the debate about keeping support for pre-RE while still working on RE. Under both GIT and Mercurial this is trivial, because the branching and re-merging support is far easier to use than SVN's. If things were abstracted out better than they are now, it should be possible to maintain support for both, while also allowing bug corrections to both branches at the same time for shared code. It's not like new code would need to be written for pre-renewal servers; outside of major refactoring, at this point it's mostly bug support and adding features/improvements that could be used for both branches.

Additionally, while I support the idea of dropping flat-text files, keep in mind that the servers should not be reading or writing directly to/from the flat files or database all over the place. This should be abstracted behind a common interface that serializes/deserializes the information. Most of the server should be more or less ignorant of how the data is read and written. This allows any back end to be used for reading and storing the data, be it flat-text, MySQL, MSSQL, PostgreSQL, Oracle, SQLite, XML, JSON, or whatever format you choose. Though from my experience in the eA code, revising at that level would be a major task (and a pain in the ass)... but not one to be taken lightly if you're truly serious about improving the server.

Another idea would be to start consolidating things more toward a component-based design. This would be an even more massive change that would take a ridiculous amount of time (it may be easier to just start writing a new server from the ground up and piecing in code), but it would vastly improve the server itself and make it a hell of a lot easier to extend game objects with new functionality.