XMR-STAK vs CastXMR – who is more profitable?

XMR-STAK vs CastXMR - Who will win?

I suppose any good Rx Vega mining guide needs to objectively discuss and evaluate the two prominent Vega mining software options… both of which have a faithful following within the Vega mining community.  This post will present an honest and objective assessment.  Here we go!
(Note: I don’t know who the winner will be as I write this intro… Let’s see how this shakes out)

The Contenders

In Corner #1 – CastXMR ver 0.7    (Note: ver 0.8.1 is now out, unsure of performance delta)
The self-proclaimed “Fastest miner for AMD Radeon RX Vega GPU series”.  CastXMR is closed-source software and has a development fee of 1.5%.

In Corner #2 – XMR-STAK ver 2.0.0.   (Note: ver 2.2.0 is now out, +40 h/s on 64s)
XMR-STAK has no special Vega tuning baked in, but it offers configuration file options that let you tune it yourself.  Some dual-thread configurations have become standard enough that I think it’s fair to call it “Vega tuned” from the perspective of this comparison.  XMR-Stak is open source and has a default development fee of 2% (0.5% higher).
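For anyone who has not opened it yet, that tuning lives in xmr-stak’s amd.txt file.  A dual-thread Vega entry looks roughly like the sketch below (two entries sharing the same “index” put two mining threads on one GPU; 1932 is the intensity I run on my unencumbered Vegas; exact field names can differ slightly between xmr-stak versions, so treat this as illustrative rather than something to copy blindly):

```
"gpu_threads_conf" : [
    // Two threads on GPU 0 (a Vega 64), both at my usual 1932 intensity
    { "index" : 0, "intensity" : 1932, "worksize" : 8, "affine_to_cpu" : false },
    { "index" : 0, "intensity" : 1932, "worksize" : 8, "affine_to_cpu" : false },
],

"platform_index" : 0,
```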

Discussion of Bias:  

Here are the facts on my bias going into this…  I dabbled with both programs when I initially set up my miner.  I think most would agree that CastXMR is a bit more noob friendly in that the start command is a single line and the tuning is built into the software.  I was a noob who was also willing to tweak, so my initial focus was mainly features and performance.  My initial results gave a trivially small performance edge to xmr-stak, but the real tiebreaker for me was the web interface.  I was initially dealing with some hash drop issues with both programs and appreciated xmr-stak enabling me to quickly check miner status on my phone…. one might call it an addiction :-).  Anyway, as many of you know, besides the fact that hash drops are now a rarer occurrence for me, there have been two major changes since those “early” days:

  1. JJs_HashMonitor makes the occasional Vega hash drop a non-event.  Having a well-formatted web interface is less important because the monitor detects the hash drop on its own and resets the miners to full speed.  Awesome! (I hope you have all tipped JJ for saving your hashes!).  
  2. CastXMR’s new 11/29 release added a remote monitoring capability (Woot!).  While JJs_HashMonitor is not yet configured to support CastXMR… that is simply a fork away, so I am willing to concede it as an existing capability for the purpose of this comparison. 

I have no vested interest in either software so it’s fair to call this an honest assessment because given (1) and (2) above, I don’t know what my motivation would be for bias. Like everybody else I am just trying to find the most effective software for my rig.

Before going any farther I also want to say that I recommend you view this post as a guide on a method you can use to make a performance assessment for your particular rig.  Each rig is different.  The analysis below will discuss my method and my numbers…  It will stop short of making any recommendations because as always… YMMV.

Discussion of CastXMR Reported Values:

I have to start with a discussion of a few numerical irregularities regarding the hash rate waterfall that scrolls across the CastXMR screen.  The figures below are from a Monero mining session that started at 9:55PM.  I let the program run undisturbed for about 15 minutes to ensure everything was settled in and then took a screen capture.

Figure 1: CastXMR Results after 17 Minutes of Undisturbed Operation (Remote Desktop is Active)

When it’s running, some pretty nice numbers are displayed, which obviously contribute to the glowing reputation of CastXMR.  My Vega 64 (not serving a monitor) is GPU 0 which, in Figure 1, shows values of 2020.9, 2018.6, 2012.9, 2013.5, and 2013.1 h/s.

2020 h/s definitely pops out as impressive considering the steady 2002 h/s I see when I mine Monero with xmr-stak.  The average of those numbers is a bit lower at 2015.8 h/s, but still impressive.  Score one for CastXMR… but wait…  The problem comes in the next figure.  When you type “s” to get a hash report from CastXMR you get the following:

Figure 2: CastXMR self generated summary data

The average hash rate for GPU 0 is reported as 1994.6 h/s?!?  How can that be?  I have been watching the screen scroll by and there was not one instance when the GPU 0 value was less than 1994.6 h/s, let alone low enough to pair with a 2020.9 h/s to produce an average of 1994.6 h/s.  That average (1994.6) is a full 20 h/s lower than the average we calculated from Figure 1 (2015.8).  Curious.  Perhaps more puzzling is that directly after that summary report it gets right back to the business of displaying eye-popping numbers like 2029.4 h/s.  Xmr-stak and its 2002 h/s can’t compete with 2029.4 h/s… but compares quite favorably with 1994.6 h/s.  Which is it?

Consider that the 5-GPU average calculated from the bottom of Figure 2 is (2029.4+1930.0+1931.0+1944.7+1958.5)/5 = 1958.7 h/s.  By contrast, the average reported in the cyan “share accepted” report just above it is a more modest 1935.1 h/s.  I really don’t know what to make of the non-average numbers, so I disregard them…

The figures above are not cherry-picked… these are typical of CastXMR’s performance on my machine, and it leaves me scratching my head a bit.  I do not suspect any foul play here.  I suspect the numbers come from real calculations but might just not represent the value we assume them to be.  Perhaps they show the absolute fastest hash rate calculated over the display interval… and the “s” report gives the true average hash rate over some time period?  It is really hard to know, but one thing is for sure…  CastXMR has the reputation for being “fast”.  It may in fact be faster… we won’t know until we get to the bottom of this post, but whatever the case, it is not AS FAST as the reputation built by the numbers that roll across the display would suggest.  Focus on the average values that it displays in the summary report (“s”) and in the periodic report that appears when an “accepted share” is reported.

Ok, now let’s get to it…

The Playing Field

I have 5 Vegas: two Vega 64s and three Vega 56s.  One Vega 64 (GPU 3 / Threads 6&7) serves an HDMI dongle, so its performance does not match that of the other Vega 64 (GPU 0 / Threads 0&1).  The Vega miner is set up as in the published guide, with the three exceptions / clarifications that follow:

  • Most of the guide was written when my miner had 4 Vegas and 1 Nvidia GTX 750.  I do explain in the guide that I have purchased another Rx Vega 56 to replace the GTX 750… and that I have added what I learned from that experience to the guide.  Some of you might not have read the guide since then, so I wanted to repeat it here for clarity.
  • In the guide (and when mining in real life) I do CPU mining in parallel with Vega mining.  Because CastXMR does not come with a CPU miner, I did not have xmr-stak mining with the CPU during these tests (it turns out it didn’t make a difference, but I wasn’t sure).
  • In the guide I suggest people with monitors/dongles attached to mining Vegas use an intensity of 1800 on both threads of that particular Vega in order to get stability… and then work up from there (vs. my standard 1932/1932).  My GPU 3 is an Rx Vega 64 that serves my HDMI dongle, and it is stable with intensities of 1908 and 1800.  Those are the values you will see on GPU 3 (threads 6&7) when xmr-stak is mining (sketched just below).
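Specifically, assuming GPU 3 maps to OpenCL device index 3 on my system, the GPU 3 entries in my amd.txt look something like this (same version caveats as the earlier amd.txt sketch):

```
// The dongle-serving Vega 64: one thread backed off to 1800 for stability
{ "index" : 3, "intensity" : 1908, "worksize" : 8, "affine_to_cpu" : false },
{ "index" : 3, "intensity" : 1800, "worksize" : 8, "affine_to_cpu" : false },
```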

Test Procedure:

  1. Restarted the computer.  
  2. Logged in via Chrome Remote Desktop
  3. Started JJ’s Hash Monitor so it would restart my Vegas and apply my OverdriveNTool parameters (again, the exact parameters from the guide)
  4. Closed JJs_HashMonitor and the associated miner
  5. Opened Windows File Manager
  6. Double-clicked the cmd file that sends the miner the commands it needs to start (the cmd files themselves are sketched after this list)
    • cast_xmr included the --remoteaccess flag
    • xmr-stak included the --noCPU and --noNVIDIA flags
  7. Once a miner was started I closed Windows File Manager so the only window open was the miner window and nothing else.
  8. I ended the Chrome Remote Desktop session 
  9. I took all values from a separate computer on my network via the web interface
    • Note that CastXMR does not display the values in a nice web format, but the data is all there and accessible, so this seemed like the best way to get “headless computer” performance.
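For reference, those cmd files are just one-line launchers.  A minimal sketch of what mine look like is below; the install paths, wallet address, and GPU list are placeholders, and the CastXMR flags other than --remoteaccess are from memory, so verify them against the CastXMR readme before using them:

```
:: start_xmrstak.cmd -- launch xmr-stak for GPU-only mining (pool/wallet live in its config files)
cd /d "C:\miners\xmr-stak"
xmr-stak.exe --noCPU --noNVIDIA

:: start_castxmr.cmd -- launch CastXMR with remote monitoring enabled
:: (-S = pool, -u = wallet, -G = GPU indices; these three flag names are my best recollection, not gospel)
cd /d "C:\miners\cast_xmr"
cast_xmr-vega.exe -S pool.supportxmr.com:7777 -u YOUR_WALLET_ADDRESS -G 0,1,2,3,4 --remoteaccess
```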

What I did not do:  I did not use JJs_HashMonitor during the tests (because I did not want to reset the Vegas between tests).  Thus, I did not reset the Vegas between the two Monero mining sessions, and I did not re-apply OverdriveNTool parameters between the two sessions.

Results

The official result will be based on the average effective hash rate.  The effective hash rate is calculated by taking the mining software’s reported average hash rate and de-rating it by the percentage of shares rejected and by the development fee.  (For instance, for the 20 minute run presented in Figure 2 above, CastXMR reported a 95% yield).  Both miners are pointed at pool.supportXMR:7777 (average ping time = 15ms).
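In other words, the comparison metric works out to: effective h/s = reported average h/s x (1 - rejected share fraction) x (1 - dev fee fraction).  As a made-up example, a miner reporting 10,000 h/s with 2% rejected shares and a 1.5% dev fee would net 10,000 x 0.98 x 0.985 = 9,653 h/s.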

CastXMR Results

CastXMR ran for about 30 minutes before I polled the web interface and captured Figure 3:

Figure 3: CastXMR had an effective Monero Hash Rate of 9498 h/s when accounting for lost shares

So, CastXMR displays an initial average rate of 9809 h/s, but with 3.2% of its shares rejected and a 1.5% development fee, the resulting effective hash rate is 9809 x 96.8% x 98.5% = 9353.7 h/s.
Note: While the web interface does not provide a rejection reason beyond “num_outdated”, observing the prior run (Figure 2 above) makes it likely the shares were rejected as “Outdated because of Job change”.  While this may seem like a pool issue rather than a miner software issue, the fact is that I routinely use xmr-stak with supportXMR and get no such rejections… but they occur every time I run cast_xmr.  I conclude it is thus attributable to the software’s job handling, and since I know of no knobs I can turn with respect to this issue, I just have to account for it in the comparison via the “effective hash rate”.

XMR-Stak Results:

Xmr-stak ran for about 30 minutes before I polled the web interface and captured Figure 4 and 5:

Figure 4: XMR-stak had an effective Monero Hash Rate of 9734 h/s when accounting for lost shares
Figure 5: XMR-stak had zero lost shares during the 30 minute mining session

So, XMR-stak has an initial average rate of 9734.7 h/s, and with a 100% yield the effective hash rate remains the full 9734.7 h/s.  XMR-Stak comes with a default 2% development fee, so the effective yield reduces to 9734.7 x 98% = 9540 h/s.

Conclusion

It has been shown that the hash rate values in the waterfall of numbers on the CastXMR screen should be ignored in favor of the averages CastXMR itself reports.

XMR-stak reports an average hash rate that is 99.2% of the CastXMR reported hash rate (a deficit of about 15 h/s per Vega).  However, when comparing the effective yield after accounting for lost shares and the development fee difference, XMR-stak’s yield is 2% better than CastXMR’s.  That is an improvement of about 185 h/s in favor of XMR-stak (about 35 h/s per Vega).

Endurance Test

I was concerned that perhaps my short test periods had cast CastXMR in an unfair light, so I ran an extended 8 hr test to see if the number of “Outdated” shares would settle out.  Unfortunately, the yield remained similar (a little worse).

There were 1082 shares submitted during the extended test period and Figure 6 shows that 41 of them were rejected as “outdated” (3.9% waste).  CastXMR appeared to average 9813.8 h/s, but after accounting for the lost shares and the development fee the effective yield reduces to 9289.6 h/s…  the corrected yield shows that xmr-stak once again beat CastXMR.  The 2.7% margin in favor of xmr-stak works out to a non-trivial 250 h/s on my 5 Vega system…  ~50 h/s per Vega.

Figure 6: CastXMR lost 3.9% of its effort to “Outdated” shares during the 8.3 hr endurance test

*12/2/2017 Update: I have completed an extended 8 hr test of XMR-Stak (Results Here).  There were no invalid shares during the test, so the analysis conclusion for my miner remains unchanged (I did get a slightly higher rate than that reported above due to an intensity change to 1908/1908 on the Vega that serves my HDMI dongle).

It is probably worth mentioning again what I said up front…  I recommend you view this post as a guide on the method you can use to make a performance assessment for your particular rig. Each rig is different. The analysis was a discussion of my method and my numbers…  YMMV.

Thanks to all who stuck with this number filled analytical post all the way to the bottom.  Not exactly a page turner but hopefully you found it to be fair, objective and helpful.  Please feel free to pass this along to anyone debating the pros and cons of the various mining software!
