ckx_

Members
  • Posts

    10
  • Joined

  • Last visited


  1. Does anyone know how to get rid of the "ATK 0 - 0 DEF: 0 - 0" pop-up in shops? Shown in the bottom right. Thanks.
  2. I have custom refine costs set up in refine_db.yml. When I use the RefineUI, it shows me the correct cost for refining on the initial screen, the one where you select a weapon. Then, after each refine, it shows the previous level's cost under the option to continue refining. Is this normal? To illustrate what I mean, I've made a short video: http://tanasinn.one/pix/Screencast_20240226_115239-2.webm

After the first refine, it shows a cost of 1,000z, but +1->+2 is really 2,000z. This pattern continues throughout the refine process: if +2->+3 is 3,000z, it will show the previous level's cost of 2,000z, then update to 3,000z for +3->+4, even if that cost is higher still. The RefineUI always shows the next level's refine cost as the previous level's. Note that if you press "Back" and return to the initial refine window where you select materials, the cost is displayed correctly.

Here's an example of my first two refine levels for level 1 weapons in the YAML DB:

```yaml
- Group: Weapon
  Levels:
    - Level: 1
      RefineLevels:
        - Level: 1
          Bonus: 200
          Chances:
            - Type: Normal
              Rate: 10000
              Price: 1000
              Material: RGX_Steel
        - Level: 2
          Bonus: 400
          Chances:
            - Type: Normal
              Rate: 10000
              Price: 2000
              Material: RGX_Steel
```

I think the cost is sent to the client in clif.cpp's clif_refineui_info function. Near the bottom of that function is a for loop that sets up the packet, with the following block of interest:

```cpp
if( cost != nullptr ){
	p->req[count].itemId = client_nameid( cost->nameid );
	p->req[count].chance = (uint8)( cost->chance / 100 );
	p->req[count].zeny = cost->zeny;
	p->packetLength += sizeof( struct PACKET_ZC_REFINING_MATERIAL_LIST_SUB );
	count++;
}
```

clif_refineui_info is called after every press of the "Refine" button, so I would have expected this to be where the client receives the price for the next refine. Debugging shows that the correct cost is being sent to the client on each Refine press, so now I'm thinking the issue must be client-side. Anyone got any insight into this?
  3. Interesting... So you can confirm with 100% certainty that the bug doesn't occur when running without other applications in Admin mode and never alt-tabbing? When you run another process as admin, does it trigger even if you don't alt-tab? And vice versa: does it occur if you alt-tab with no other admin processes open? The client I play with is 2020-07-15bRagexe. Thanks for your input.

EDIT: Hmm, I booted into Windows and did some playtesting. I got this bug on Payon Dungeon F1 without ever having lost focus, on a fresh boot with nothing else open. It feels like it happens when effects get spammy, but it's really inconsistent... I just don't know...
  4. Thanks for the anecdote, but I play via Proton on Fedora without any special privileges involved. I'm not convinced process privileges are a factor, and it can occur even on a fresh client without having alt-tabbed. Fullscreen vs. windowed might be relevant, I'm not sure, but I want mitigation techniques that aren't tied to user behavior. @refresh is a server-side command that resynchronizes the client's view data with the server-side source of truth; it has nothing to do with the purely client-side lightmap rendering (and doesn't help). I'm more wondering about solutions/workarounds that might be part of the map lighting, or whether we understand the trigger criteria. I can tell when the lightmap glitch is about to occur because I get some 3D artifacting right before it happens, but beyond that I don't have much insight into it myself.
  5. I'm sure we're all familiar with the bug in the attached screenshot, in which your client's lightmap rendering has some catastrophic failure and turns all map textures into a pure white flashbang. The common workaround is to type /lightmap to disable lightmaps, then live with no lighting until you can restart the client. But do we know what triggers this state? Are there any known workarounds to mitigate it (perhaps a change to a map's shadow or lighting data)? Are we forever forsaken to /lightmap eating a macro slot and needing to find a moment to restart the client?
  6. If you ever find the time and inclination, some pointers on what to do for player attacks would be appreciated, too; I've figured it out server-side, but have not yet started investigating the client side. Thank you.
  7. I made a custom Act Editor script to assist in the trivial cases where setting SoundId to "atk" is enough. I figured I'd post it here to save anyone else a few moments, if they ever decide to take up the task of more accurate-feeling damage timings:

```csharp
using System;
using ErrorManager;
using GRF.FileFormats.ActFormat;
using GRF.Image;

namespace Scripts {
	public class Script : IActScript {
		public object DisplayName { get { return "Sound ID Replication"; } }
		public string Group { get { return "Custom Scripts"; } }
		public string InputGesture { get { return "Ctrl-Alt-Shift-A"; } }
		public string Image { get { return "settings.png"; } }

		public void Execute(Act act, int selectedActionIndex, int selectedFrameIndex, int[] selectedLayerIndexes) {
			if (act == null) return;
			string errorString = string.Empty;
			try {
				act.Commands.Begin();
				// Collect the SoundIds of every frame in the selected "base" action.
				System.Collections.Generic.List<int> soundIds = new System.Collections.Generic.List<int>();
				foreach (var frame in act[selectedActionIndex].Frames) {
					soundIds.Add(frame.SoundId);
				}
				// Find the first action index of the 8-direction group containing the selection.
				int start_index = selectedActionIndex;
				for (int i = selectedActionIndex; (i % 8) != 0; i++) {
					start_index = i - 7;
				}
				int end_index = start_index + 7;
				for (int i = start_index; i < end_index + 1; i++) {
					int actionIndex = i;
					if (actionIndex == selectedActionIndex) {
						continue;
					}
					GRF.FileFormats.ActFormat.Action action = act[actionIndex];
					if (action.NumberOfFrames != soundIds.Count) {
						errorString += "Frame count mismatch on action index " + actionIndex +
							". Expected " + soundIds.Count + " frames, but got " +
							action.NumberOfFrames + "." + System.Environment.NewLine;
						continue;
					}
					for (int j = 0; j < action.Frames.Count; j++) {
						int frameIndex = j;
						act.Commands.SetSoundId(actionIndex, frameIndex, soundIds[frameIndex]);
					}
				}
				if (errorString != string.Empty) {
					System.Windows.Forms.MessageBox.Show(errorString, "Frame count mismatch",
						System.Windows.Forms.MessageBoxButtons.OK,
						System.Windows.Forms.MessageBoxIcon.Exclamation);
				}
			}
			catch (Exception err) {
				act.Commands.CancelEdit();
				ErrorHandler.HandleException(err, ErrorLevel.Warning);
			}
			finally {
				act.Commands.End();
				act.InvalidateVisual();
				act.InvalidateSpriteVisual();
			}
		}

		public bool CanExecute(Act act, int selectedActionIndex, int selectedFrameIndex, int[] selectedLayerIndexes) {
			return act != null;
		}
	}
}
```

The intention is to set your selected Action Index up as your "base" index, then run the script. It replicates the SoundIds of each frame over to the other relevant action indices. If a frame count mismatch occurs between the base index and another action index (relatively rare, but it happens), it skips all mismatched indices and pops up a message so you can handle them manually afterwards. I am not backing up the act here, as I have my own backup flow going on, so you might want to re-add the act backup command from the sample script if you use this.

EDIT: Generalized the script to make it work on any action type, not just attack actions at indices 16~23.
  8. Thank you for the detailed breakdown. This is exactly what I was looking for. I had noticed the behavior of the damage sound file, but still found it inconsistent; the rest of your post clarifies it greatly.

I've done a little work on that front for my server (currently still in development). I added an "AmotionActive" property to monsters and skills, an int value used by battle_delay_damage to determine when _sub should get called. It specifies the percentage of an AttackMotion at which the damage should be "active", i.e. the point in the attack animation where HP should be deducted from a player. The vanilla timer for delayed damage looks like this:

```cpp
add_timer(tick+amotion, battle_delay_damage_sub, 0, (intptr_t)dat);
```

My modified call looks more like this (safety checks omitted for brevity):

```cpp
int dmgdelay = amotion;
if (dmgdelay > 0) {
	int a_active;
	if (src->type == BL_MOB && !skill_id) {
		// Mob normals are handled on a case by case basis
		a_active = ((TBL_MOB*)src)->db->amotion_active;
	} else if (skill_id > 0) {
		a_active = skill_get_amotionactive(skill_id);
	} else {
		// General cases get amotion reduced by the default amotion active value.
		a_active = AMOTION_ACTIVE;
	}
	dmgdelay = (a_active * dmgdelay) / 100;
}
add_timer(tick + dmgdelay, battle_delay_damage_sub, 0, (intptr_t)dat);
```

So if Willow's full amotion is 700 and I set its AmotionActive property to 33, the battle_delay_damage_sub timer fires at tick+231 (33% of 700), causing the HP to be subtracted at roughly that point in the animation. This is fairly simple, and it allows me to specify an arbitrary point of an amotion where damage is actually dealt. It works well to eliminate "laggy"-feeling damage (whether server-side damage lands too early or too late compared to the animation). I've also modified some of the flow around when clif_damage gets called to make things feel more responsive for this work.

That's all good, but until now I've been restricted by my inability to define at what point the client displays the hitstun/damage, so I've just been doing my best to match the server-side damage delays to the flinches defined by Gravity. With this newfound knowledge I'll have control of both ends of the equation, and be able to create more responsive-feeling combat where things deal damage when they actually hit you. Thanks a bunch, and thanks for the tooling that makes this stuff simple.
  9. After you call clif_damage on the server, the display of animations and damage numbers seems to be in the client's hands, unless I've missed something. So my question is: for any given attack animation, how does one determine which frame of the attack triggers the client to play the target's flinch animation and display the damage number? For example, if you spawn a Willow, you might notice that it deals damage around the time the animation begins, while the Willow is still winding up. In contrast, if you spawn a Condor, damage does not happen until nearly the end of the animation. I took a look in Act Editor to see if there were any relevant properties, but didn't find anything. Any ideas? Thanks.
  10. I'd like to add the old Sonic Blow / Arrow Vulcan animations back to newer clients. Does anyone know where to begin on this? I believe it's a client-side change, but I am not entirely positive. Background: I'm working on a pre-renewal server that uses a lot of newer renewal client features (market shops, randomopts, achievements, etc). So far I don't have any issues other than these skill animations being removed. Removal happened in kRO on 2018/12/19 (patch notes). There are some old threads on the forums about this, but they're all dead ends. I'll link them anyway: Ref1, Ref2. The Herc Forums also have a thread for this. If anyone has any info on this, let me know. Likewise I'll update the thread if I get anywhere myself.