I have a small list of improvements to make already:
All SWProxy and PCAP uploads are now processed through a queue system to provide a better experience during Free Rune Removal day. The server should remain usable, with the slight downside that you might have to wait a little bit to get your data imported. The average profile takes about 20 seconds to import, so your wait time should not be very long in any case.
The queue is still missing a few features I will be working on soon, such as your position in the queue and an estimated wait time. I just wanted to get the system operational before FRR day so any issues could be worked out before the load hits and takes the site down.
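As a rough illustration (not the actual SWARFARM implementation), the idea behind queued uploads can be sketched with Python's standard library: uploads are enqueued immediately so the site stays responsive, and a background worker imports them one at a time in order. The `process_upload` function and profile names here are placeholders.

```python
import queue
import threading

upload_queue = queue.Queue()  # pending profile uploads, processed FIFO
results = []

def process_upload(profile):
    # Placeholder for the real import work (~20 seconds per profile on average)
    return f"imported {profile}"

def worker():
    # Drain the queue one upload at a time so the web server never blocks
    while True:
        item = upload_queue.get()
        if item is None:  # sentinel value tells the worker to shut down
            break
        results.append(process_upload(item))
        upload_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for profile in ["alice", "bob"]:
    upload_queue.put(profile)  # returns immediately; import happens in background
upload_queue.put(None)
t.join()
print(results)  # ['imported alice', 'imported bob']
```

In practice a site like this would use a persistent task queue (e.g. Celery with a broker) rather than an in-process `queue.Queue`, but the flow is the same: enqueue fast, import in the background.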
The wish rune reward is finally being parsed correctly after I got my hands on a full data log of the wish. The logs unfortunately have to be reset to ensure the data is accurate. Be sure to restart your proxies!
Restart your proxies! The early data logs will be used to check that the system is working properly and may be reset if anything is broken. Once a sufficient sample of data has been collected, I can begin creating the reports. If you have any suggestions on the format or something you'd like to see, leave feedback or raise an issue on GitHub.
The raid logs will fill in at an accelerated rate because I am able to collect the data on all three players' rewards, not just your own. Steps have been taken to prevent duplicate data when multiple people are logging the same raid.
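To show the general idea of deduplication (the field names below are hypothetical, not SWARFARM's actual schema), one common approach is to keep a single record per composite key, so the same raid reward uploaded by several participants is only counted once:

```python
def dedupe_raid_logs(logs):
    """Keep one record per (raid_id, player_id) pair, assuming that pair
    uniquely identifies a reward. Field names are illustrative only."""
    seen = set()
    unique = []
    for log in logs:
        key = (log["raid_id"], log["player_id"])
        if key not in seen:
            seen.add(key)
            unique.append(log)
    return unique

logs = [
    {"raid_id": 1, "player_id": "a", "reward": "rune"},
    {"raid_id": 1, "player_id": "b", "reward": "crystal"},
    {"raid_id": 1, "player_id": "a", "reward": "rune"},  # same reward, logged by a second uploader
]
print(len(dedupe_raid_logs(logs)))  # 2
```

In a database this would typically be enforced with a unique constraint on the same pair of columns rather than in application code.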
Wish logging is incomplete for the moment; once someone gets a rune, I can examine the data format and finish the parsing code.
The entirety of the source code running SWARFARM is now available on GitHub! The door is open to contributors if you want to dig into the code, or you can simply raise an issue if you find a bug or have an idea. If you don't have or want a GitHub account, the feedback section is still open.
Additionally, the raw data powering the data logs is now available for download (anonymized, of course). I think it's only fair that this data is available to the community because it would not exist without the contributions of the community in the first place.
The data is in JSON format due to the nested and variable nature of the recorded data. If you need CSV for ease of use, there are several converters online that can do this for you, though how well the conversion works will vary. The data files are sorted into folders by category, and one file is published per week of data. New data is exported every Sunday at midnight Eastern time.
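If you'd rather convert the JSON yourself, a minimal sketch with Python's standard library is to flatten each nested record into dotted column names and feed the result to `csv.DictWriter`. The record structure shown here is invented for illustration; the real export files have their own fields.

```python
import csv
import io
import json

def flatten(record, prefix=""):
    """Flatten nested dicts into dotted keys, e.g. {"a": {"b": 1}} -> {"a.b": 1}."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

# Hypothetical sample matching the nested shape of the exports
raw = '[{"wizard": {"id": 1}, "drop": "rune"}, {"wizard": {"id": 2}, "drop": "crystal"}]'
rows = [flatten(r) for r in json.loads(raw)]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=sorted(rows[0]))
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

Note that flattening only works cleanly when every record shares the same shape; records with variable fields need a pass to collect the union of all column names first.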