How to Test Your Trading Strategies with Backtesting

H1 Backtest of ParallaxFX's BBStoch system

Disclaimer: None of this is financial advice. I have no idea what I'm doing. Please do your own research or you will certainly lose money. I'm not a statistician, data scientist, well-seasoned trader, or anything else that would qualify me to make statements such as the below with any weight behind them. Take them for the incoherent ramblings that they are.
TL;DR at the bottom for those not interested in the details.
This is a bit of a novel, sorry about that. It was mostly for getting my own thoughts organized, but if even one person reads the whole thing I will feel incredibly accomplished.

Background

For those of you not familiar, please see the various threads on this trading system here. I can't take credit for this system, all glory goes to ParallaxFX!
I wanted to see how effective this system was at H1 for a couple of reasons: 1) My current broker is TD Ameritrade - their Forex minimum is a mini lot, and I don't feel comfortable enough yet with the risk to trade mini lots on the higher timeframes (i.e., wider pip swings) that ParallaxFX's system uses, so I wanted to see if I could scale it down. 2) I'm fairly impatient, so I don't like to wait days and days with my capital tied up just to see if a trade is going to win or lose.
This does mean it requires more active attention, since you are checking for setups once an hour instead of once a day or every 4-6 hours, but the upside is that you trade more often this way, so you end up winning or losing faster and moving on to the next trade. Spread does eat more of the trade this way, but I'll cover this in my data below - it ends up not being a problem.
I looked at data from 6/11 to 7/3 on all pairs with a reasonable spread (pairs listed at the bottom, above the TL;DR). So this represents about 3-4 weeks' worth of trading. I used mark (mid) price charts. Spreadsheet link is below for anyone that's interested.

System Details

I'm pretty much using ParallaxFX's system textbook, but since there are a few options in his writeups, I'll include all the discretionary points here:

And now for the fun. Results!

As you can see, a higher target ended up with higher profit despite a much lower winrate. This is partially just how things work out with profit targets in general, but there's an additional point to consider in our case: the spread. Since we are trading on a lower timeframe, there is less overall price movement and thus the spread takes up a much larger percentage of the trade than it would if you were trading H4, Daily or Weekly charts. You can see exactly how much it accounts for each trade in my spreadsheet if you're interested. TDA does not have the best spreads, so you could probably improve these results with another broker.
EDIT: I grabbed typical spreads from other brokers, and it turns out that while TDA is pretty competitive on majors, their minors/crosses are awful! IG beats them by 20-40% and Oanda beats them by 30-60%! Using IG spreads for the calculations increased profits considerably (another 5% on top), and Oanda spreads increased profits massively (another 15%!). Definitely going to be considering a broker other than TDA for this strategy. Plus that'll allow me to trade micro lots, so I can be more granular (and thus accurate) with my position sizing and compounding.

A Note on Spread

As you can see in the data, there were scenarios where the spread was 80% of the overall size of the trade (the size of the confirmation candle that you draw your Fibonacci retracements over), which would obviously cut heavily into your profits.
Removing any trades where the spread is more than 50% of the trade width improved profits slightly without removing many trades, but this is almost certainly just coincidence on a small sample size. Going below 40% and even down to 30% starts to cut out a lot of trades for the less-common pairs, but doesn't actually change overall profits at all (~1% either way).
However, digging all the way down to 25% starts to really make some movement. Profit at the -161.8% TP level jumps up to 37.94% if you filter out anything with a spread that is more than 25% of the trade width! And this even keeps the sample size fairly large at 187 total trades.
You can get your profits all the way up to 48.43% at the -161.8% TP level if you filter all the way down to only trades where the spread is less than 15% of the trade width; however, your sample size gets much smaller at that point (108 trades), so I'm not sure I would trust that as being accurate in the long term.
Overall based on this data, I'm going to only take trades where the spread is less than 25% of the trade width. This may bias my trades more towards the majors, which would mean a lot more correlated trades as well(more on correlation below), but I think it is a reasonable precaution regardless.
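The 25% filter is simple enough to sketch in code. This is a hypothetical illustration only - the trade records, field names, and numbers below are invented, not pulled from my spreadsheet:

```python
# Keep only trades where the spread is less than 25% of the trade width
# (the confirmation candle's height). All records here are made up.

def spread_ratio(trade):
    """Spread as a fraction of the trade width."""
    return trade["spread_pips"] / trade["width_pips"]

def filter_by_spread(trades, max_ratio=0.25):
    """Drop trades whose spread eats too much of the trade width."""
    return [t for t in trades if spread_ratio(t) < max_ratio]

trades = [
    {"pair": "EUR/USD", "spread_pips": 0.9, "width_pips": 12.0},
    {"pair": "GBP/NZD", "spread_pips": 6.0, "width_pips": 10.0},  # 60% - filtered out
    {"pair": "USD/JPY", "spread_pips": 1.0, "width_pips": 8.0},
]

kept = filter_by_spread(trades)
print([t["pair"] for t in kept])  # ['EUR/USD', 'USD/JPY']
```

The nice thing about this filter is that it's fully objective and known before entry, unlike filters based on outcome.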

Time of Day

Time of day had an interesting effect on trades. In a totally predictable fashion, the vast majority of setups occurred during the London and New York sessions: 5am-12pm Eastern. However, there was one outlier: there were many setups on the 11PM bar, and the winrate was about the same as the big hours in the London session. No idea why this hour in particular - anyone have any insight? That's smack in the middle of the Tokyo/Sydney overlap, not at the open or close of either.
On many of the hour slices I have a feeling I'm just dealing with small-number statistics, since I didn't have a lot of data when breaking it down by individual hours. But here it is anyway - for all TP levels, these three things showed up (all in Eastern time):
I don't have any reason to think these timeframes would maintain this behavior over the long term. They're almost certainly meaningless. EDIT: When you de-dup highly correlated trades, the number of trades in these timeframes really drops, so from this data there is no reason to think these timeframes would be any different than any others in terms of winrate.
That being said, these time frames work out for me pretty well because I typically sleep 12am-7am Eastern time. So I automatically avoid the 5am-6am timeframe, and I'm awake for the majority of this system's setups.

Moving stops up to breakeven

This section goes against everything I know and have ever heard about trade management. Please someone find something wrong with my data. I'd love for someone to check my formulas, but I realize that's a pretty insane time commitment to ask of a bunch of strangers.
Anyways. What I found was that for these trades moving stops up...basically at all...actually reduced the overall profitability.
One of the data points I collected while charting was where the price retraced back to after hitting a certain milestone. I.e., once the price hit the -61.8% profit level, how far back did it retrace before hitting the -100% profit level (if at all)? The same goes for the -100% profit level - how far back did it retrace before hitting the -161.8% profit level (if at all)?
Well, some complex excel formulas later and here's what the results appear to be. Emphasis on appears because I honestly don't believe it. I must have done something wrong here, but I've gone over it a hundred times and I can't find anything out of place.
Now, you might think exactly what I did when looking at these numbers: oof, the spread killed us there right? Because even when you move your SL to 0%, you still end up paying the spread, so it's not truly "breakeven". And because we are trading on a lower timeframe, the spread can be pretty hefty right?
Well, even when I manually modified the data so that the spread wasn't subtracted (i.e., "breakeven" was truly +/- 0), things don't look a whole lot better, and still way worse than the passive trade management method of leaving your stops in place and letting it run. And that isn't even a realistic scenario, because to adjust out the spread you'd have to move your stoploss inside the candle edge by at least the spread amount, meaning it would almost certainly be triggered more often than in the data I collected (which was purely based on the fib levels and mark price). Regardless, here are the numbers for that scenario:
From a literal standpoint, what I see behind this behavior is that 44 of the 69 breakeven trades (65%!) ended up being profitable to -100% after retracing deeply (but not to the original SL level), which offset the purely losing trades better than the partial profit taken at -61.8% did. And 36 went all the way back to -161.8% after a deep retracement without hitting the original SL. Anyone have any insight into this? Is this a problem with just not enough data? It seems like enough trades that a pattern should emerge, but again, I'm no expert.
I also briefly looked at moving stops to other lower levels (78.6%, 61.8%, 50%, 38.2%, 23.6%), but that didn't improve things any. No hard data to share as I only took a quick look - and I still might have done something wrong overall.
The data is there to infer other strategies if anyone would like to dig in deep (more explanation on the spreadsheet below). I didn't do other combinations because the formulas got pretty complicated and I had already answered all the questions I was looking to answer.

2-Candle vs Confirmation Candle Stops

Another interesting point is that the original system has the SL level (for stop entries) just at the outer edge of the 2-candle pattern that makes up the system. Out of pure laziness, I set up my stops based only on the confirmation candle. And as it turns out, that is a much better way to go about it.
Of the 60 purely losing trades, only 9 of them (15%) would have gone on to be winners with stops on the 2-candle formation. Certainly not enough to justify the extra loss and/or reduced profits you are exposing yourself to in every single other trade by setting a wider SL.
Oddly, in every single scenario where the wider stop did save the trade, it ended up going all the way to the -161.8% profit level. Still, not nearly worth it.

Correlated Trades

As I've said many times now, I'm really not qualified to be doing an analysis like this. This section in particular.
Looking at shared currency among the pairs traded, 74 of the trades are correlated. Quite a large group, but it makes sense considering the sort of moves we're looking for with this system.
This means you are opening yourself up to more risk if you were to trade on every signal, since you are technically trading the same underlying sentiment on each different pair. For example, GBP/USD and AUD/USD moving together almost certainly means it's due to USD moving both pairs, rather than GBP and AUD coincidentally moving the same size and direction at the same time. So if you were to trade both signals, you would very likely win or lose both trades - meaning you are actually risking double what you'd normally risk (unless you halve both positions, which can be a good option and is discussed in ParallaxFX's posts and in various other places that cover pair correlation; I won't go into detail about those strategies here).
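A crude way to flag correlated signals is to check for a shared currency between pairs. Here's a hypothetical sketch of that idea, using the spread-to-width ratio as the objective tiebreaker for which trade to keep (all field names and numbers are invented for illustration):

```python
# Flag setups as correlated when they share a currency, and greedily keep
# the setup with the lowest spread-to-width ratio among each correlated group.

def currencies(pair):
    """Split 'GBP/USD' into its two currency codes."""
    base, quote = pair.split("/")
    return {base, quote}

def pick_uncorrelated(setups):
    """Keep setups sharing no currency with an already-kept one,
    preferring the lowest spread-to-width ratio."""
    kept = []
    used = set()
    for s in sorted(setups, key=lambda s: s["spread"] / s["width"]):
        if currencies(s["pair"]) & used:
            continue  # shares a currency with a setup we already took
        kept.append(s)
        used |= currencies(s["pair"])
    return kept

setups = [
    {"pair": "GBP/USD", "spread": 1.0, "width": 10.0},
    {"pair": "AUD/USD", "spread": 0.8, "width": 10.0},  # shares USD with GBP/USD
    {"pair": "EUR/JPY", "spread": 1.2, "width": 12.0},
]
print([s["pair"] for s in pick_uncorrelated(setups)])  # ['AUD/USD', 'EUR/JPY']
```

This is deliberately strict - as noted below, shared currency doesn't always mean the trades move together, so it filters more than strictly necessary.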
Interestingly though, 17 of those apparently correlated trades ended up with different wins/losses.
Also, looking only at trades that were correlated, winrate is 83%/70%/55% (for the three TP levels).
Does this give some indication that the same signal on multiple pairs means the signal is stronger? That there's some strong underlying sentiment driving it? Or is it just a matter of too small a sample size? The winrate isn't really much higher than the overall winrates, so that makes me doubt it is statistically significant.
One more funny tidbit: EUR/CAD netted the lowest overall winrate: 30% to even the -61.8% TP level, on 10 trades. Seems like that is just a coincidence and not enough data, but dang, that's a sucky losing streak.
EDIT: WOW I spent some time removing correlated trades manually and it changed the results quite a bit. Some thoughts on this below the results. These numbers also include the other "What I will trade" filters. I added a new worksheet to my data to show what I ended up picking.
To do this, I removed correlated trades - typically keeping the one whose spread was a lower % of the trade width, since that's objective and something I can see ahead of time. Obviously I'd like to only keep the winning trades, but I won't know that during the trade. This did reduce the overall sample size down to a level that I wouldn't otherwise consider big enough, but since the results are generally consistent with the overall dataset, I'm not going to worry about it too much.
I may also use more discretionary methods (support/resistance, quality of indecision/confirmation candles, news/sentiment for the pairs involved, etc.) to filter out correlated trades in the future. But as I've said before, I'm going for a pretty mechanical system.
This brought the 3 TP levels and even the breakeven strategies much closer together in overall profit. It muted the profit from the high R:R strategies and boosted the profit from the low R:R strategies. This tells me pair correlation was skewing my data quite a bit, so I'm glad I dug in a little deeper. Fortunately my original conclusion to use the -161.8 TP level with static stops is still the winner by a good bit, so it doesn't end up changing my actions.
There were a few times where MANY (6-8) correlated pairs all came up at the same time, so it'd be a crapshoot to an extent. And the data showed this - often they won/lost together, but sometimes they did not. As an arbitrary rule, the more correlations there were, the more trades I ended up taking (and thus risking). For example, if there were 3-5 correlations, I might take the 2 "best" trades given my criteria above. With 5+ setups, I might take the best 3 trades, even if the pairs are somewhat correlated.
I have no true data to back this up, but to illustrate using one example: if AUD/JPY, AUD/USD, CAD/JPY, USD/CAD all set up at the same time (as they did, along with a few other pairs on 6/19/20 9:00 AM), can you really say that those are all the same underlying movement? There are correlations between the different correlations, and trying to filter for that seems rough. Although maybe this is a known thing, I'm still pretty green to Forex - someone please enlighten me if so! I might have to look into this more statistically, but it would be pretty complex to analyze quantitatively, so for now I'm going with my gut and just taking a few of the "best" trades out of the handful.
Overall, I'm really glad I went further on this. The boosting of the B/E strategies makes me trust my calculations on those more since they aren't so far from the passive management like they were with the raw data, and that really had me wondering what I did wrong.

What I will trade

Putting all this together, I am going to attempt to trade the following (demo for a bit to make sure I have the hang of it, then for keeps):
Looking at the data for these rules, test results are:
I'll be sure to let everyone know how it goes!

Other Technical Details

Raw Data

Here's the spreadsheet for anyone that'd like it. (EDIT: Updated some of the setups from the last few days that have fully played out now. I also noticed a few typos, but nothing major that would change the overall outcomes. Regardless, I am currently reviewing every trade to ensure they are accurate. UPDATE: Finally all done. Very few corrections, no change to results.)
I have some explanatory notes below to help everyone else understand the spiraled labyrinth of a mind that put the spreadsheet together.

Insanely detailed spreadsheet notes

For you real nerds out there. Here's an explanation of what each column means:

Pairs

  1. AUD/CAD
  2. AUD/CHF
  3. AUD/JPY
  4. AUD/NZD
  5. AUD/USD
  6. CAD/CHF
  7. CAD/JPY
  8. CHF/JPY
  9. EUR/AUD
  10. EUR/CAD
  11. EUR/CHF
  12. EUR/GBP
  13. EUR/JPY
  14. EUR/NZD
  15. EUR/USD
  16. GBP/AUD
  17. GBP/CAD
  18. GBP/CHF
  19. GBP/JPY
  20. GBP/NZD
  21. GBP/USD
  22. NZD/CAD
  23. NZD/CHF
  24. NZD/JPY
  25. NZD/USD
  26. USD/CAD
  27. USD/CHF
  28. USD/JPY

TL;DR

Based on the reasonable rules I discovered in this backtest:

Demo Trading Results

Since this post, I started demo trading this system assuming a 5k capital base and risking ~1% per trade. I've added the details to my spreadsheet for anyone interested. The results are pretty similar to the backtest when you consider that real-life conditions/timing are a bit different. I missed some trades due to life (work, out of the house, etc.), so that brought my total # of trades and thus overall profit down, but the winrate is nearly identical. I also closed a few trades early for various reasons (not liking the price action, seeing support/resistance emerge, etc.).
A quick note is that TD's paper trade system fills at the mid price for both stop and limit orders, so I had to subtract the spread from the raw trade values to get the true profit/loss amount for each trade.
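The mid-price fill correction is just one subtraction per round trip - buy at the ask, sell at the bid, so a real fill pays one full spread that the simulator doesn't. A minimal sketch, with invented numbers:

```python
# Adjust a paper-trade result filled at the mid price to a realistic one:
# each round trip loses one full spread relative to mid-price fills.

def true_pl_pips(raw_pl_pips, spread_pips):
    """Raw mid-price P/L minus the spread paid on a real round trip."""
    return raw_pl_pips - spread_pips

print(true_pl_pips(10.0, 1.5))  # 8.5
```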
I'm heading out of town next week, then after that it'll be time to take this sucker live!

Live Trading Results

I started live-trading this system on 8/10, and almost immediately had a string of losses much longer than either my backtest or demo period. Murphy's law huh? Anyways, that has me spooked so I'm doing a longer backtest before I start risking more real money. It's going to take me a little while due to the volume of trades, but I'll likely make a new post once I feel comfortable with that and start live trading again.
submitted by ForexBorex to Forex

2.5 years and 145 backtested trades later

I have a habit of backtesting every strategy I find, as long as it makes sense. I find it fun, and even if the strategy ends up underperforming, it gives me a good excuse to gain valuable chart experience that would normally take years to gather. After I backtest something, I compare it to my current methodology, and I usually conclude that mine is better, either because it performs better or because the new method requires too much time to manage. (Spoiler: so far, I like this one better.)
Over the last two days, I have worked on backtesting ParallaxFx's strategy, as it seemed promising and it seemed to fit my personality (a lazy fuck who will happily halve his yearly return if it means he can spend 10% less time in front of the screens). My backtesting is preliminary, and I didn't delve very deep into the data gathering. I usually track all sorts of stuff, but for this first pass, I stuck to the main indicators of performance over a restricted sample of markets.
Before I share my results with you, I always feel the need to make a preface that I know most people will ignore.
Strategy
I am not going to go into the strategy in this thread. If you haven't read the series of threads by the guy who shared it, go here.
As suggested by my mentioned personality type, I went with the passive management options of ParallaxFx's strategy. After a valid setup forms, I place two orders of half my risk. I add or remove 1 pip from each level to account for spread.
Sample
I tested this strategy over the seven major currency pairs: AUDUSD, USDCAD, NZDUSD, GBPUSD, USDJPY, EURUSD, USDCHF. The time period started on January 1st, 2018 and ended on July 1st, 2020 - a 2.5-year backtest. I tested on the D1 timeframe, and I plan on testing other timeframes.
My "protocol" for backtesting is that, if I like what I see during this phase, I will move to the second phase where I'll backtest over 5 years and 28 currency pairs.
Units of measure
I used R multiples to track my performance. If you don't know what they are, I'm too sleepy to explain right now. This article explains what they are. The gist is that the results you'll see do not take into consideration compounding and they normalize volatility (something pips don't do, and why pips are in my opinion a terrible unit of measure for performance) as well as percentage risk (you can attach variable risk profiles on your R values to optimize position sizing in order to maximize returns and minimize drawdowns, but I won't get into that).
Results
I am not going to link the spreadsheet directly, because it is in my GDrive folder and that would allow you to see my personal information. I will attach screenshots of both the results and the list of trades. In the latter, I have included the day of entry for each trade, so if you're up to the task, you can cross-reference all the trades I have placed to make sure I am not making things up.
Overall results: R Curve and Segmented performance.
List of trades: 1, 2, 3, 4, 5, 6, 7. Something to note: I treated every half position as an individual trade for the sake of simplicity. It should not mess with the results; it simply means you will see huge streaks of wins and losses. This does not matter because I'm at half risk in each of them, so a winstreak of 6 trades is really a winstreak of 3 trades.
For reference:
Thoughts
Nice. I'll keep testing. As of now it is vastly better than my current strategy.
submitted by Vanguer to Forex

Trading economic news

The majority of this sub is focused on technical analysis. I regularly ridicule such "tea leaf readers" and advocate for trading based on fundamentals and economic news instead, so I figured I should take the time to write up something on how exactly you can trade economic news releases.
This post is long as balls so I won't be upset if you get bored and go back to your drooping dick patterns or whatever.

How economic news is released

First, it helps to know how economic news is compiled and released. Let's take Initial Jobless Claims, the number of initial claims for unemployment benefits around the United States from Sunday through Saturday. Initial in this context means the first claim for benefits made by an individual during a particular stretch of unemployment. The Initial Jobless Claims figure appears in the Department of Labor's Unemployment Insurance Weekly Claims Report, which compiles information from all of the per-state departments that report to the DOL during the week. A typical number is between 100k and 250k and it can vary quite significantly week-to-week.
The Unemployment Insurance Weekly Claims Report contains data that lags 5 days behind. For example, the Report issued on Thursday March 26th 2020 contained data about the week ending on Saturday March 21st 2020.
In the days leading up to the Report, financial companies will survey economists and run complicated mathematical models to forecast the upcoming Initial Jobless Claims figure. The combined result of the surveyed experts is called the "consensus"; specific companies, experts, and websites will also provide their own forecasts. Different companies will release different consensuses. Usually they are pretty close (within 2-3k), but for last week's record-high Initial Jobless Claims the reported consensuses varied by up to 1M! In other words, there was essentially no consensus.
The Unemployment Insurance Weekly Claims Report is released each Thursday morning at exactly 8:30 AM ET. (On Thanksgiving the Report is released on Wednesday instead.) Media representatives gather at the Frances Perkins Building in Washington DC and are admitted to the "lockup" at 8:00 AM ET. In order to be admitted to the lockup you have to be a credentialed member of a media organization that has signed the DOL lockup agreement. The lockup room is small so there is a limited number of spots.
No phones are allowed. Reporters bring their laptops and connect to a local network; there is a master switch on the wall that prevents/enables Internet connectivity on this network. Once the doors are closed the Unemployment Insurance Weekly Claims Report is distributed, with a heading that announces it is "embargoed" (not to be released) prior to 8:30 AM. Reporters type up their analyses of the report, including extracting key figures like Initial Jobless Claims. They load their write-ups into their companies' software, which prepares to send it out as soon as Internet is enabled. At 8:30 AM the DOL representative in the room flips the wall switch and all of the laptops are connected to the Internet, releasing their write-ups to their companies and on to their companies' partners.
Many of those media companies have externally accessible APIs for distributing news. Media aggregators and squawk services (like RanSquawk and TradeTheNews) subscribe to all of these different APIs and then redistribute the key economic figures from the Report to their own subscribers within one second after Internet is enabled in the DOL lockup.
Some squawk services are text-based while others are audio-based. FinancialJuice.com provides a free audio squawk service; internally they have a paid subscription to a professional squawk service and they simply read out the latest headlines to their own listeners, subsidized by ads on the site. I've been using it for 4 months now and have been pretty happy. It usually lags behind the official release times by 1-2 seconds and occasionally they verbally flub the numbers or stutter and have to repeat, but you can't beat the price!
Important - I’m not affiliated with FinancialJuice and I’m not advocating that you use them over any other squawk. If you use them and they misspeak a number and you lose all your money don’t blame me. If anybody has any other free alternatives please share them!

How the news affects forex markets

Institutional forex traders subscribe to these squawk services and use custom software to consume the emerging data programmatically and then automatically initiate trades based on the perceived change to the fundamentals that the figures represent.
It's important to note that every institution will have "priced in" their own forecasted figures well in advance of an actual news release. Forecasts and consensuses all come out at different times in the days leading up to a news release, so by the time the news drops everybody is really only looking for an unexpected result. You can't really know what any given institution expects the value to be, but unless someone has inside information you can pretty much assume that the market has collectively priced in the experts' consensus. When the news comes out, institutions will trade based on the difference between the actual and their forecast.
Sometimes the news reflects a real change to the fundamentals with an economic effect that will change the demand for a currency, like an interest rate decision. However, in the case of the Initial Jobless Claims figure, which is a backwards-looking metric, trading is really just self-fulfilling speculation that market participants will buy dollars when unemployment is low and sell dollars when unemployment is high. Generally speaking, news that reflects a real economic shift has a bigger effect than news that only matters to speculators.
Massive and extremely fast news-based trades happen within tenths of a second on the ECNs on which institutional traders are participants. Over the next few seconds the resulting price changes trickle down to retail traders. Some economic news, like Non Farm Payroll Employment, has an effect that can last minutes to hours as "slow money" follows behind on the trend created by the "fast money". Other news, like Initial Jobless Claims, has a short impact that trails off within a couple minutes and is subsequently dwarfed by the usual pseudorandom movements in the market.
The bigger the difference between actual and consensus, the bigger the effect on any given currency pair. Since economic news releases generally relate to a single currency, the biggest and most easily predicted effects are seen on pairs where one currency is directly affected and the other is not affected at all. Personally, I trade USD/JPY because the time difference between the US and Japan ensures that no news will be coming out of Japan at the same time that economic news is being released in the US.
Before deciding to trade any particular news release you should measure the historical correlation between the release (specifically, the difference between actual and consensus) and the resulting short-term change in the currency pair. Historical data for various news releases (along with historical consensus data) is readily available. You can pay to get it exported into Excel or whatever, or you can scroll through it for free on websites like TradingEconomics.com.
Let's look at two examples: Initial Jobless Claims and Non Farm Payroll Employment (NFP). I collected historical consensuses and actuals for these releases from January 2018 through the present, measured the "surprise" difference for each, and then correlated that to short-term changes in USD/JPY at the time of release using 5 second candles.
I omitted any releases that occurred simultaneously as another major release. For example, occasionally the monthly Initial Jobless Claims comes out at the exact same time as the monthly Balance of Trade figure, which is a more significant economic indicator and can be expected to dwarf the effect of the Unemployment Insurance Weekly Claims Report.
USD/JPY correlation with Initial Jobless Claims (2018 - present)
USD/JPY correlation with Non Farm Payrolls (2018 - present)
The horizontal axis on these charts is the duration (in seconds) after the news release over which correlation was calculated. The vertical axis is the Pearson correlation coefficient: +1 means that the change in USD/JPY over that duration was perfectly linearly correlated to the "surprise" in the releases; -1 means that the change in USD/JPY was perfectly linearly correlated but in the opposite direction; and 0 means that there is no correlation at all.
For Initial Jobless Claims you can see that for the first 30 seconds USD/JPY is strongly negatively correlated with the difference between consensus and actual jobless claims. That is, fewer-than-forecast jobless claims (fewer newly unemployed people than expected) strengthens the dollar and greater-than-forecast jobless claims (more newly unemployed people than expected) weakens the dollar. Correlation then trails off and changes to a moderate/weak positive correlation. I interpret this as algorithms "buying the dip" and vice versa, but I don't know for sure. From this chart it appears that you could profit by opening a trade for 15 seconds (duration with strongest correlation) that is long USD/JPY when Initial Jobless Claims is lower than the consensus and short USD/JPY when Initial Jobless Claims is higher than expected.
The chart for Non Farm Payroll looks very different. Correlation is positive (higher-than-expected payrolls strengthen the dollar and lower-than-expected payrolls weaken the dollar) and peaks at around 45 seconds, then slowly decreases as time goes on. This implies that price changes due to NFP are quite significant relative to background noise and "stick" even as normal fluctuations pick back up.
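The correlation measurement itself is straightforward to reproduce. Here's a hypothetical sketch for a single duration window - the surprise and price-change values below are invented, only the method matches what I described:

```python
# Pearson correlation between news "surprise" (actual - consensus) and the
# USD/JPY change over one candidate hold duration. Repeat per duration to
# build the curves discussed above. All data here is made up.

import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented surprises (thousands of claims) and USD/JPY moves (pips)
# over a 15-second window after each release.
surprises = [-12, 5, 30, -8, 14, -25]
moves_15s = [6.0, -2.5, -11.0, 3.5, -5.0, 9.5]

print(round(pearson(surprises, moves_15s), 3))  # strongly negative
```

Running this per duration (5s, 10s, ..., 120s) and plotting the coefficients gives charts like the two linked above.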
I wanted to show an example of what the USD/JPY S5 chart looks like when an "uncontested" (no other major simultaneously news release) Initial Jobless Claims and NFP drops, but unfortunately my broker's charts only go back a week. (I can pull historical data going back years through the API but to make it into a pretty chart would be a bit of work.) If anybody can get a 5-second chart of USD/JPY at March 19, 2020, UTC 12:30 and/or at February 7, 2020, UTC 13:30 let me know and I'll add it here.

Backtesting

So without too much effort we determined that (1) USD/JPY is strongly negatively correlated with the Initial Jobless Claims figure for the first 15 seconds after the release of the Unemployment Insurance Weekly Claims Report (when no other major news is being released), and also that (2) USD/JPY is strongly positively correlated with the Non Farm Payrolls figure for the first 45 seconds after the release of the Employment Situation report.
Before you can assume you can profit off the news you have to backtest and consider three important parameters.
Entry speed: How quickly can you realistically enter the trade? The correlation performed above was measured from the exact moment the news was released, but realistically if you've got your finger on the trigger and your ear to the squawk it will take a few seconds to hit "Buy" or "Sell" and confirm. If 90% of the price move happens in the first second you're SOL. For back-testing purposes I assume a 5 second delay. In practice I use custom software that opens a trade with one click, and I can reliably enter a trade within 2-3 seconds after the news drops, using the FinancialJuice free squawk.
Minimum surprise: Should you trade every release or can you do better by only trading those with a big enough "surprise" factor? Backtesting will tell you whether being more selective is better long-term or not.
Hold time: The optimal time to hold the trade is not necessarily the same as the time of maximum correlation. That's a good starting point but it's not necessarily the best number. Backtesting each possible hold time will let you find the best one.
The spread: When you're only holding a position open for 30 seconds, the spread will kill you. The correlations performed above used the midpoint price, but in reality you have to buy at the ask and sell at the bid. Brokers aren't stupid and the moment volume on the ECN jumps they will widen the spread for their retail customers. The only way to determine if the news-driven price movements reliably overcome the spread is to backtest.
Stops: Personally I don't use stops, neither take-profit nor stop-loss, since I'm automatically closing the trade after a fixed (and very short) amount of time. Additionally, brokers have a minimum stop distance; the profits from scalping the news are so slim that even the nearest stops they allow will generally not get triggered.
I backtested trading these two news releases (since 2018), using a 5 second entry delay, real historical spreads, and no stops, cycling through different "surprise" thresholds and hold times to find the combination that returns the highest net profit. It's important to maximize net profit, not expected value per trade, so you don't over-optimize and reduce the total number of trades taken to one single profitable trade. If you want to get fancy you can set up a custom metric that combines number of trades, expected value, and drawdown into a single score to be maximized.
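The search over "surprise" thresholds and hold times can be sketched like this. The trade records, candidate thresholds, and P&L numbers are all invented; the point is only the shape of the grid search and that it maximizes total net profit rather than per-trade expectancy.

```python
from itertools import product

# Hypothetical backtest records: (surprise in thousands, {hold_seconds: net
# P&L in yen per unit}). Net P&L is assumed to already include the 5-second
# entry delay and the historical spread.
trades = [
    (5,  {25: -0.004, 75: -0.006}),
    (12, {25:  0.010, 75:  0.004}),
    (30, {25:  0.022, 75:  0.011}),
    (8,  {25: -0.002, 75:  0.003}),
    (20, {25:  0.015, 75:  0.007}),
]

def grid_search(trades, thresholds, hold_times):
    """Find the (threshold, hold time) combo with the highest TOTAL net profit.

    Maximizing total profit (not per-trade expectancy) keeps the search from
    collapsing onto one lucky trade."""
    best = None
    for thr, hold in product(thresholds, hold_times):
        taken = [pnl[hold] for surprise, pnl in trades if abs(surprise) >= thr]
        if taken and (best is None or sum(taken) > best[0]):
            best = (sum(taken), thr, hold, len(taken))
    return best

total, thr, hold, n = grid_search(trades, thresholds=[0, 7, 15], hold_times=[25, 75])
# With this toy data: 7k threshold, 25-second hold, 4 trades taken.
```

A fancier objective (combining trade count, expectancy, and drawdown, as mentioned above) would just replace the `sum(taken)` score.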
For the Initial Jobless Claims figure I found that the best combination is to hold trades open for 25 seconds (that is, open at 5 seconds elapsed and hold until 30 seconds elapsed) and only trade when the difference between consensus and actual is 7k or higher. That leads to 30 trades taken since 2018 and an expected return of... drumroll please... -0.0093 yen per unit per trade.
Yep, that's a loss of approx. $8.63 per lot.
Disappointing right? That's the spread and that's why you have to backtest. Even though the release of the Unemployment Insurance Weekly Claims Report has a strong correlation with movement in USD/JPY, it's simply not something that a retail trader can profit from.
Let's turn to the NFP. There I found that the best combination is to hold trades open for 75 seconds (that is, open at 5 seconds elapsed and hold until 80 seconds elapsed) and trade every single NFP (no minimum "surprise" threshold). That leads to 20 trades taken since 2018 and an expected return of... drumroll please... +0.1306 yen per unit per trade.
That's a profit of approx. $121.25 per lot. Not bad for 75 seconds of work! That's a +6% ROI at 50x leverage.
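As a sanity check on the arithmetic: the per-lot dollar figures follow from the yen-per-unit numbers because a standard lot is 100,000 units and a yen P&L converts to dollars at the prevailing USD/JPY rate. The ~107.7 rate below is an assumed mid-2020 level, chosen to roughly reproduce the figures quoted above.

```python
# A standard forex lot is 100,000 units. USD/JPY P&L is quoted in yen, so
# converting to dollars means dividing by the USD/JPY rate (~107.7 assumed).
LOT = 100_000
USDJPY = 107.7

def yen_per_unit_to_usd_per_lot(yen_per_unit):
    return yen_per_unit * LOT / USDJPY

jobless = yen_per_unit_to_usd_per_lot(-0.0093)   # about -$8.6 per lot
nfp = yen_per_unit_to_usd_per_lot(+0.1306)       # about +$121 per lot

# At 50x leverage the margin for one $100,000 lot is $2,000, so the NFP
# trade returns roughly 6% on margin.
margin = LOT / 50
roi = nfp / margin
```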

Make it real

If you want to do this for realsies, you need to run these numbers for all of the major economic news releases. Markit Manufacturing PMI, Factory Orders MoM, Trade Balance, PPI MoM, Export and Import Prices, Michigan Consumer Sentiment, Retail Sales MoM, Industrial Production MoM, you get the idea. You keep a list of all of the releases you want to trade, when they are released, and the ideal hold time and "surprise" threshold. A few minutes before the prescribed release time you open up your broker's software, turn on your squawk, maybe jot a few notes about consensuses and model forecasts, and get your finger on the button. At the moment you hear the release you open the trade in the correct direction, hold it (without looking at the chart!) for the required amount of time, then close it and go on with your day.
Some benefits of trading this way:
* Most major economic releases come out at either 8:30 AM ET or 10:00 AM ET, and then you're done for the day.
* It's easily backtestable. You can look back at the numbers and see exactly what to expect your return to be.
* It's fun! Packing your trading into 30 seconds and knowing that institutions are moving billions of dollars around as fast as they can based on the exact same news you just read is thrilling.
* You can wow your friends by saying things like "The St. Louis Fed had some interesting remarks on consumer spending in the latest Beige Book."
* No crayons involved.
Some downsides:
* It's tricky to be fast enough without writing custom software. Some broker software is very slow and requires multiple dialog boxes before a position is opened, which won't cut it.
* The profits are very slim. You're not going to impress your Instagram followers into joining your expensive trade copying service with your 30-second twice-weekly trades.
* Any friends you might wow with your boring-ass economic talking points are themselves the most boring people in the world.
I hope you enjoyed this long as fuck post and you give trading economic news a try!
submitted by thicc_dads_club to Forex [link] [comments]

I've reproduced 130+ research papers about "predicting the stock market", coded them from scratch and recorded the results. Here's what I've learnt.

ok, so firstly,
I found all of the papers through Google search and Google Scholar. Google Scholar doesn't actually have every research paper, so you need to use both together to find them all. They were all found by using phrases like "predict stock market" or "predict forex" or "predict bitcoin" and terms related to those.

Next,
I only tested papers written in the past 8 years or so, I think anything older is just going to be heavily Alpha-mined so we can probably just ignore those ones altogether.

Then,
Anywhere the methodology was slightly ambiguous, I tried every possible permutation to try and capture what the authors may have meant. For example, one paper adds engineered features to the price and then says "then we ran the data through our model" - it's not clear whether that means the original data or the engineered data, so I tried both ways. This happens more than you'd think!

THEN,
Anything that didn't work, I tried my own ideas with the data they were using or substituted one of their models with others that I knew of.

Now before we go any further, I should caveat that I was a profitable trader at multiple Tier-1 US banks, so I can say with confidence that I made a decent attempt at building whatever each author was trying to get at.

Oh, and one more thing. All of this work took about 7 months in total.

Right, let's jump in.

So with the papers, I found as many as I could, then I read through them, put them in categories, and tested one category at a time, because a lot of papers were kinda saying the same things.
Here are the categories:
Results:
Literally every single paper was either p-hacked, overfit, or a subsample of favourable data was selected (I guess ultimately they're all the same thing but still) OR a few may have had a smidge of Alpha but as soon as you add transaction costs it all disappears.
Every author that's been publicly challenged about the results of their paper says it's stopped working due to "Alpha decay" because they made their methodology public. The easiest way to test whether it was truly Alpha decay or just overfitting by the authors is just to reproduce the paper then go further back in time instead of further forwards. For the papers that I could reproduce, all of them failed regardless of whether you go back or forwards. :)
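The "go backwards instead of forwards" test can be sketched as follows. The function, data layout, and the yearly toy returns are all hypothetical; the idea is just that a strategy whose edge genuinely decayed should still look good before the paper's sample window, while an overfit one fails in both directions.

```python
from statistics import mean

def decay_or_overfit(yearly_returns, sample_start, sample_end):
    """Crude check of the "alpha decay" excuse.

    A genuinely decayed edge should still show a profit BEFORE the paper's
    sample window; an overfit one fails in both directions. `yearly_returns`
    maps a year to the reproduced strategy's return - a hypothetical layout."""
    before = [r for yr, r in yearly_returns.items() if yr < sample_start]
    after = [r for yr, r in yearly_returns.items() if yr > sample_end]
    if before and after and mean(before) > 0 and mean(after) <= 0:
        return "consistent with alpha decay"
    return "consistent with overfitting"

# Toy series: no edge on either side of the paper's 2015-2017 sample window.
rets = {2012: -0.01, 2013: 0.00, 2014: -0.02, 2015: 0.05, 2016: 0.06,
        2017: 0.04, 2018: -0.01, 2019: -0.02}
verdict = decay_or_overfit(rets, 2015, 2017)
```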

Now, results from the two most popular categories were:

The most frustrating paper:
I have true hate for the authors of this paper: "A deep learning framework for financial time series using stacked autoencoders and long-short term memory". Probably the most complex AND vague in terms of methodology and after weeks trying to reproduce their results (and failing) I figured out that they were leaking future data into their training set (this also happens more than you'd think).

The two positive take-aways that I did find from all of this research are:
  1. Almost every instrument is mean-reverting on short timelines and trending on longer timelines. This has held true across most of the data that I tested. Putting this information into a strategy would be rather easy and straightforward (although you have no guarantee that it'll continue to work in future).
  2. When we were in the depths of the great recession, almost every signal was bearish (seeking alpha contributors, news, google trends). If this holds in the next recession, just using this data alone would give you a strategy that vastly outperforms the index across long time periods.
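Takeaway 1 can be sanity-checked on any price series by comparing return autocorrelation at short and long horizons: mean reversion shows up as negative autocorrelation of one-step returns, trending as positive autocorrelation of multi-step returns. Here is a self-contained sketch on a synthetic price series built to have exactly that structure (a slow drift that flips once, plus a fast oscillation); the construction is purely illustrative.

```python
def pearson(xs, ys):
    """Plain Pearson correlation, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def lag1_autocorr(returns):
    return pearson(returns[:-1], returns[1:])

# Synthetic price: a slow drift that flips sign once (the "trend") plus a
# fast alternating oscillation (the "mean reversion").
drift = [0.2] * 50 + [-0.2] * 50
osc = [0.5 if t % 2 == 0 else -0.5 for t in range(101)]
price = [osc[0]]
for t in range(100):
    price.append(price[-1] + drift[t] + (osc[t + 1] - osc[t]))

# Short horizon: 1-step returns flip sign constantly -> negative autocorrelation.
r1 = [b - a for a, b in zip(price, price[1:])]
short_ac = lag1_autocorr(r1)

# Long horizon: non-overlapping 10-step returns follow the drift -> positive.
r10 = [price[i + 10] - price[i] for i in range(0, 90, 10)]
long_ac = lag1_autocorr(r10)
```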
Hopefully if anyone is getting into this space this will save you an absolute tonne of time and effort.
So in conclusion, if you're building trading strategies: simple is good :)

Also, one other thing I'd like to add: even the Godfather of value investing, the late Benjamin Graham (Warren Buffett's mentor), used to test his strategies (even though he traded manually). So literally every investor needs to backtest, regardless of whether you're day-trading, long-term investing, or building trading algorithms.
submitted by chiefkul to StockMarket [link] [comments]

The best crypto trading bot platform now has a free plan!

What is CLEO.one? CLEO.one brings powerful, well-informed trading automation to independent traders who don't want to spend time coding but need to be present in the markets 24/7 with perfect execution, and it is now free to use when trading on Binance! Strategies are created through simple typing. They can be tested for crypto, forex and stocks, deployed on live trading as crypto bots, or paper traded and demoed in real-time market conditions. We support the biggest crypto exchanges.
Can I create a grid/dca/specific type of bot? You can create any type of bot you please. The level of flexibility should accommodate any style of trading.
What makes CLEO.one different?
CLEO.one contains more data than any other platform, and it can be combined in infinite ways to allow traders to craft any strategy they have in mind. Price action, technical indicators, crypto fundamentals, candlestick patterns, market caps, dominance, correlation with other assets - all out of the box.
Trading results are packed with clarity and statistics. This helps you advance your trading by being able to zoom in on any detail, even if you are trading many strategies. CLEO.one lets you test your trading strategies in minutes, no matter whether they are simple or complex. Historical data runs back 50 years on the assets that have that much history. You can then automate your trading, or demo your strategies via paper trading.
The first platform that works for crypto, forex and stock traders, allowing them to shrink their strategy creation time by doing it all through simple typing. More data than anywhere else on the web and backtesting so easy that anyone can do it. Independent traders finally get radically better crypto bots and sophistication through simplicity for any asset that they dabble in.
In case you are still trading without a trading strategy, you might find it hard to improve your actions or your trading results. CLEO.one features free strategies, all profitable when historically tested, that you can modify or straight up trade.
What can I do in CLEO.one?
• Create crypto, forex or equities strategies through simple typing
• Backtest trading strategies for crypto, forex and equities
• Crypto strategies can be automated on the exchange of choice as crypto bots
• Place trades with simultaneous Trailing Take Profit and Trailing Stop Loss
• Papertrade to test out strategies in current market conditions
• Use free, profitable when tested strategies
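Trailing stops like the ones mentioned in the feature list above work by ratcheting the stop level with the trade's peak. A generic sketch of the mechanism for a long position (not any particular platform's implementation):

```python
def trailing_stop_exit(prices, trail):
    """Index and price where a long position's trailing stop fires, or None.

    The stop ratchets up with every new high and never moves back down -
    the generic mechanism, with made-up numbers below."""
    peak = prices[0]
    for i, p in enumerate(prices):
        peak = max(peak, p)
        if p <= peak - trail:
            return i, p
    return None

# Ride the move up, then exit 3 points below the 110 peak.
path = [100, 104, 108, 110, 109, 107, 111]
exit_point = trailing_stop_exit(path, trail=3)   # fires at index 5, price 107
```

A trailing take profit is the mirror image: a pending entry or exit that chases the price in the favorable direction by a fixed distance.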
Who is CLEO.one for? CLEO.one is easy to use and approachable even for traders that are starting out. Under the hood it has more than enough power to satisfy even the most experienced omni-asset traders.
• Crypto traders that want to create, test or automate their trading
• Forex traders that want to test or papertrade their strategies
• Stock traders that want sophisticated asset selection
Who owns my strategy? You do, as stated in our Terms & conditions. Unless it is something super common like "when RSI is above 30." The algorithm is in CLEO.one and we have permission to run it through our Services. The full Terms & conditions can be found here and are available at the bottom of every page of the site.
How do I get help?
- We do free onboarding calls! If you’d like to set up something specific or have a walkthrough we would love to help!
- Our responsive staff will answer any question you might have – reach out via chat on CLEO.one.
- The CLEO.one helpdesk is always available and growing.
So is it really free? When trading via Binance it is 100% free. Our subscription plans of €249, €149, and €69 apply only when you do not connect a Binance account. You do need to fulfill 2 conditions for the Binance account:
1. It needs to be created after July 21, 2020
2. It cannot be created using a referral code
That’s it! In case you need to create a new account feel free to - no KYC.
You probably still have questions…
Can I make money with your bot? We do not sell a bot, but help you work on your strategies and automate the best ones. Or place one-off trades with simultaneous (trailing) stop loss and take profit. You become a better trader, you don’t have to rely on shady signals, and you get to achieve your long-term trading goals. We do feature strategies that are all profitable when tested, and you are free to test them, change them or straight up trade them.
Is it safe? You never transfer any funds to us, everything stays on the exchange.
Do I have to link an account to try the platform? No, we have a freemium version that lets you create strategies and backtest them.
You can find the details here or check out the offer. Thank you! We're happy to help with anything.
submitted by CLEOone to CLEOone [link] [comments]

Good and better ways of algorithmic trading strategy development and management.

Hi everyone! After spending about a year learning algotrading (and trading overall - mostly Forex) I've come to a few conclusions that I'd like to discuss with more experienced algotraders.
From what I've seen the most popular approach for TA based strategy development looks like that: get some data, throw some TA tools on it, look for patterns, optimize, backtest, repeat. With some effort it is usually possible to find a strategy that will perform quite well in backtests over some limited period of time. Out of sample testing may show that such strategy is profitable during particular market conditions.
With this approach it is guaranteed that the strategy will stop working once market conditions change (and it will inevitably happen). It is obviously possible to be profitable with this approach, but one would have to constantly discover new strategies and update old ones.
Don't you think that instead of working on particular strategies one should be more concerned about developing a global framework for managing strategies, reoptimizing them, discovering new ones and discarding the obsolete? This may sound cliche but I think that such approach may be much more productive in the long run.
submitted by MaishyN to algotrading [link] [comments]

Simplicity or Complexity: Which road to take?

I am fairly new to all of this, I do not do automated trading itself, but use backtesting to study the markets. I am working with 1m Forex data. Running my backtests over about 3000 "single week long segments" ranging across 24 currencies and 174 weeks. Got my data from FXCM and resampled it. So far I have tested trend following, RSI, Bollinger Bands, various combinations of these three, while accounting for stop losses, margin calls and average 2pip spread on each trade. But the best result I have gotten is a 0.01% yield, which is just noise. When I produce overall losing strategies in backtests they also do not go lower than -0.03%. So it seems that indicators predict markets randomly and you end up losing as much as you make (to be fair they do yield ~1% returns with 20x leverage, if you assume that you will never get margin called).
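Accounting for that 2-pip spread explicitly is one of the details that decides whether a backtest like the one above is honest. A sketch of the cost model, with illustrative prices and a flat-spread assumption:

```python
PIP = 0.0001        # pip size for a 4-decimal pair like EUR/USD
SPREAD_PIPS = 2     # the flat average spread assumed per trade

def net_pnl(entry_mid, exit_mid, direction):
    """Per-unit P&L of one round trip, charging half the spread on each side.

    direction is +1 for long, -1 for short; prices are mid quotes."""
    half = SPREAD_PIPS / 2 * PIP
    entry = entry_mid + direction * half   # long: buy at the ask
    exit_ = exit_mid - direction * half    # long: sell at the bid
    return direction * (exit_ - entry)

# A 5-pip winning move nets only 3 pips after the spread...
win = net_pnl(1.1000, 1.1005, +1)
# ...and a trade that goes nowhere is a guaranteed 2-pip loss.
flat = net_pnl(1.1000, 1.1000, +1)
```

This is why a signal with near-zero gross edge ends up around 0.0% (or slightly negative) once every trade pays the spread.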
So I am thinking of changing my approach, since simple indicator based strategies seem to result in 0.0% returns overall. Therefore, I will be going into testing of complex strategies.
I care about what other people think: is seeking more complex strategies a rabbit hole that I will never come back from? It seems like there is no limit to the number of elements you can pile on.
Is sticking to simple strategies using 1-3 indicators in combination and searching for something that will work a safer path?
Should I give up on this forex thing and be realistic?
Any input and opinions will be appreciated, you do not have to share trade secrets, thanks.
submitted by barumal to algotrading [link] [comments]

Forex Trend Trading - The Trend is Your Friend

Electronic currency trading is fast becoming a widely popular forex investment venture. This is where you use the Internet and a few software applications to go about your daily forex trading. Forex data providers are hooked up to an electronic forex trading platform. These providers send out forex data including historical foreign exchange information good for forex backtesting, alerts, signals and news.

There are computer applications which can aid you in your trades. These have preconfigured systems which handle trade decisions and predictions based on its updated database of current forex information sent by the electronic currency trading platform itself or any of the forex data providers in its list. The built-in systems of these computer programs are also designed to interact with the decisions, trading styles and predictions of beta users, prioritizing stored user data with the least percentage of trade losses in comparison to its current forex database information. Also, most of these beta testers are popular forex specialists and investment advisers.

To profit from your eCurrency trading ventures, you need to identify the best platform to use. Ask around for advice from friends and colleagues with knowledge and experience in electronic forex trading. Also consult reputable sources of information about these platforms and software applications. These can include popular forex specialists, finance advisers and investment consultants.

Make sure that your computer is free from malicious programs which can steal your private information. This can be a bigger problem, especially if your forex trading applications and the platforms where you go about your daily trades are compromised. You may not even know that your computer is already sending out confidential data, related to your forex trading ventures or otherwise, to predesignated servers, all while you're using these electronic currency trading platforms.

https://discountdevotee.com/forex-millennium-review/
submitted by adamssmith8754 to u/adamssmith8754 [link] [comments]

Looking for a good starting point - does MultiCharts fit the bill?

I'm fairly new to trading, coding, and finance, but I've begun developing a strategy that I'd like to flesh out. I've done some manual backtests and they seem promising, but I don't have a big enough sample size to really feel confident about it yet. I figured that the best thing to do is to develop an algorithm that I can backtest with a lot of data, but I'm feeling a bit unsure where to start presented with all these different options combined with my inexperience.
I've been using QuantConnect a bit, but I'm not thrilled with the fact that certain staff members can view my code, so I want to switch to something that won't have that issue. I was originally looking at Python platforms, but I've heard that it executes too slowly and I should be looking at building my algorithm in C# or C++. I'm not looking to execute on tick data as of now, and the lowest time frame I can anticipate going is 1 minute, although more likely it's 5 minutes. Will the speed issue come into play on those time frames? I'd also like to get into machine learning eventually and as far as I know Python is better for that so seems like there may be a bit of a trade off...that said, I'd imagine there's a way to integrate Python machine learning with C# or C++, but perhaps that's my ignorance speaking.
Someone on here recommended using MultiCharts - does this seem like a good starting point? I'm worried about getting too deep into developing my own platform and MultiCharts seems fairly user friendly as far as having a GUI and robust platform. I'd really like to just get straight into developing and testing my strategy, as that's what got me interested in algotrading to begin with.
Oh, the strategy is based on forex, if that's helpful. Thanks in advance for any input!
submitted by prokcomp to algotrading [link] [comments]

Start 2: 8th Failed Attempt and going for the 9th

In my previous post, I started my ventures to make some money. So here's my progress:

Income Stream No. 1: Forex

The system failed and kinda blew my $100 deposit on it. What I've learnt is that:
Did more reading and found that there's a better way to test, build and test again before I can begin selling signal subscriptions in the MQL market. My next step is to try and implement a Kumo breakout with IKH (Ichimoku Kinko Hyo). I'm keeping my strategies as simple as possible, and the success metric is that they're self-sustainable with little intervention from me. I can also test against quality data without having to go with a broker first (i.e. without renting a VPS and downloading all of the data to my machine for backtesting): by using tickstory.com, I can just download all of the Dukascopy tick data and then backtest from there. Of course, this is going to be different when I trade with a broker account because of spreads. I will adjust later, but for now I have to focus on building my signal service.
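For anyone curious what a Kumo (Ichimoku cloud) breakout actually checks, here is a rough sketch. Note the simplification flagged in the docstring: a textbook implementation shifts the cloud 26 bars forward, which is omitted here to keep the sketch short, and all the test data is synthetic.

```python
def midpoint(highs, lows, period, i):
    """Midpoint of the highest high and lowest low over the last `period` bars."""
    return (max(highs[i - period + 1:i + 1]) + min(lows[i - period + 1:i + 1])) / 2

def kumo_breakout_signal(highs, lows, closes, i, fast=9, slow=26, span_b=52):
    """+1 if the close breaks above the cloud, -1 if below, 0 if inside.

    Simplification: textbook Ichimoku shifts the cloud 26 bars forward; here
    the spans are compared at the bar where they are computed, which keeps
    the sketch short but is NOT the standard definition."""
    tenkan = midpoint(highs, lows, fast, i)
    kijun = midpoint(highs, lows, slow, i)
    senkou_a = (tenkan + kijun) / 2
    senkou_b = midpoint(highs, lows, span_b, i)
    top, bottom = max(senkou_a, senkou_b), min(senkou_a, senkou_b)
    if closes[i] > top:
        return +1
    if closes[i] < bottom:
        return -1
    return 0

# A steady synthetic uptrend closes above the cloud -> breakout long.
up_h = [100 + 0.1 * t for t in range(60)]
up_l = [h - 0.2 for h in up_h]
up_c = [h - 0.1 for h in up_h]
signal = kumo_breakout_signal(up_h, up_l, up_c, 59)
```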

Income Stream No. 2: Amazon

I have to cancel this and won't follow through. Instead I'm moving towards Income Stream No. 3. Problem was that I didn't find the time to contact suppliers and ask for rebranding or customization on the items that I wanted to sell and resell via Amazon FBA. Family came first so... yeah...

Income Stream No. 3: SaaS

Good news: it's almost ready. Building a multi-tenant app with Django was very taxing on my time, especially the setup. I've had bumps in the past month with my machine not being able to load the configuration set up for my PostgreSQL database. I was trying to build a high-availability cluster setup but it took too long, so I'll have to build that part later. Deploying on AWS is harder than I thought, though. Damn security groups, couldn't load properly. I guess I have to hit up YouTube tutorials on AWS for being such a noob.
Bad news: what's pending at the moment is an activated Stripe account. I'm still waiting for my LLC to go through its approval process (registered with a Delaware agent) and it should come at any time soon. Once I have an LLC registered, I should be able to apply for an EIN at the same time. So until that happens, I'll need to quickly build my SaaS app with a test Stripe account until it's MVP-ready.

In the End

If at first you don't succeed, try again after you learn your mistake(s). Even if it means blowing your deposit and sacrificing sleep!
submitted by nosepickingexpert to juststart [link] [comments]

Algorithmic Trading Strategy for Forex (EUR/USD)

Dear Reddit,
We are a team of three people that have developed an algorithmic trading strategy for Forex during the beginning of this year.
It is coded in .NET and the broker we use is Interactive Brokers (API). The strategy is simple and trades on technical indicators, but is highly optimized.
The algorithm works very well and has already given us a return of 30% without margin during three weeks of live trading. We got good results when we backtested the algorithm too (over 6 months), but we get even better results during live trading. We have developed our own backtesting software with our own recorded data. The strategy is only trading EUR/USD right now because this currency pair has the highest liquidity in the market, and our robot needs liquidity. We hold positions from 60 ms up to 1 hour.
We are trading Live with our own money right now. But we are very interested to get in touch with people in this area to continue to develop this strategy, attract capital or collaborate with other people in this area.
We often get the question: "Is it really possible to make money with algorithmic trading?" Answer: yes you can, but it involves a lot of work, experience and a sense of how markets work. This strategy, for example, should only be used when market conditions are "right" for it, and that is: high liquidity in the FX markets, low volatility, and no major news events or volatile stock markets that can have spillover effects on the FX market. Otherwise big unexpected moves in this currency pair will occur more often, and this will result in unnecessary losses.
Which boards or communities are best for this kind of things?
Please let us know. If you have any questions about our strategy or anything else, feel free to ask :)
Thanks.
submitted by AlgoFX to algotrading [link] [comments]

Technical Analysis Weekly Review: 6. A Trading Plan, Part 1

Technical Analysis Weekly Review by ClydeMachine

Previous Week's Post:
5. Momentum & Volatility
This Week:
6. A Trading Plan, Part 1
Next Week's Post:
7. A Trading Plan, Part 2

6. A Trading Plan TL;DR


6. A Trading Plan

So you've been following TAWR for the last month - what does your trading plan look like? If you haven't started one yet, that's okay - that's what we start to cover in this week's post. First, you need to do a little soul searching.

Is this the right market for you to trade in?

Unlike other markets, the Bitcoin market does not close, not even on weekends. (International exchanges are for the most part open 6 days out of 7. BTC is around the clock.) This means there is constantly something happening, something to be watching for. Obviously you needn't be watching charts all the time and losing sleep and cuddle time because of a possible overseas news bit making waves - but this does open the market up for a lot of activity and this can be a serious stressor. If this will be too much for you, don't worry! This isn't the only market you can trade in. If this is a serious concern for you, consider other markets on the Forex. There are plenty of currency pairs to trade in that aren't nearly as crazy as those involving XBTs.
...If you're still here and not looking up USD/CHF market behaviour, that must mean you like rollercoasters.

Type of Trader: Being Honest With Yourself

Are you a swing trader? Long-term buy-and-holder looking to make a little extra in the short-term? Just curious what it's like to do what a daytrader does? Answering the question of "what type of trader are you" is important when setting up a trading plan, because certain indicators are better suited to different styles of trading. Your trading style will not necessarily reflect mine. Yours will likely differ a lot from mine and everyone else's - but as long as you can make decisions based off of that plan, and they make you money when followed, it is a good trading plan.
Ultimately, the goal of answering that question isn't to give yourself a label; it's to find a set of technical rules that 1) make you money, and 2) you can actually act on. Trader indecisiveness is a serious problem on the (digital) trading floor. If you have a killer plan that backtests well, but you find you can't actually decide when to enter and exit a position because it reacts too sensitively to market movements, that's trader indecisiveness. If it's not reactive enough and you miss entry points every time they pass, that's also trader indecision. If you can take action based on your indicators, and make money as a result, that's a good plan. If not, go ahead and revise it. Identify what's causing your money to disappear into fees and other traders' pockets, and make changes to keep that from happening!
I mentioned backtesting. That's important because whenever you come up with (or change) a trading plan, you need to...

PAPER TRADE FIRST.

If you aren't making money on paper, why would you make money in the market?
To paper trade, record the trades your prospective plan would have you make, using actual market data, and follow the market to see whether those trades would have made money had you executed them. If the rules of your plan produce satisfactory gains consistently, you can have confidence in your trading plan. If you're losing money or just barely breaking even, consider revisions. You can also use historical data to check a plan's profitability, since it's readily available: Bitcoincharts.com and Tradingview.com both let you see historical data from the Bitcoin market, for example.
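A paper-trading check against historical data can be as little as a loop over prices. Here is a minimal sketch, assuming a hypothetical moving-average crossover rule and a synthetic price series standing in for real historical data; it is an illustration of the workflow, not a plan recommendation:

```python
# Paper-trade a hypothetical rule over a price series and report the
# final account value. Rule (assumed for illustration): buy when the
# fast moving average is above the slow one, sell when it drops below.
def backtest(prices, fast=5, slow=20, cash=1000.0):
    position = 0.0  # units of the asset currently held
    for i in range(slow, len(prices)):
        fast_ma = sum(prices[i - fast:i]) / fast
        slow_ma = sum(prices[i - slow:i]) / slow
        price = prices[i]
        if fast_ma > slow_ma and position == 0:    # rule says "enter"
            position = cash / price
            cash = 0.0
        elif fast_ma < slow_ma and position > 0:   # rule says "exit"
            cash = position * price
            position = 0.0
    # mark any still-open position to the last price
    return cash + position * prices[-1]

# Synthetic uptrending series with occasional bumps; a trend-following
# rule should finish ahead of its 1000.0 starting cash here.
prices = [100 + 0.5 * t + (3 if t % 7 == 0 else 0) for t in range(120)]
final = backtest(prices)
```

Swapping in real candles from your data source, and subtracting fees per trade, turns the same loop into the paper-trading check described above.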
Obviously this will not be terribly useful to you until you've built your plan, but if you've already started to play with some indicators just to get a feel of how they look and react with the data, you'll find those two links somewhat helpful in getting a jump on next week's post.

Stick to the Facts.

Maybe your gut has never done you wrong, but always follow the chart. Befriend the trend. Trust the chart. Facts don't lie. Evidence doesn't lie. Make money by going with the market, not against it, no matter what your emotions or feelings are telling you.
This is something I've been guilty of, because the fact is I love Bitcoin. I really do. I love its functionality, its widespread growth, and the fact that it's techy at its decentralized heart. (That's a paradox, by the way.) But when a trader gets too involved with their chosen security, they believe in it for the wrong reasons. As much as I love Bitcoin, I have to sell it if the price goes into a mad nosedive. If you believe in the long-term success of Bitcoin, cool - know why you believe in it. Otherwise, just trade it and don't get too attached to it.
One of the key differences between Bitcoin and traditional stocks is that stocks are not food or clothes - you can't eat or wear them, so selling them is how you make money (locking in profits versus making gains "on paper"). Bitcoin, however, actually has use: it can be spent like any other currency (except faster!), so holding a lot of this security gives you a capability you might not otherwise have. All the same, decide just how close you want to be to Bitcoin. If you believe it will always and forever have value, and will increase in value over time no matter what, then go ahead and collect as many as you can afford. If you have your cautious doubts, remember the previous point about getting too close to a security, and trade it like any other stock.
It's all about making money, whether you measure your monetary gains in USD or XBTs.
This next segment is right out of Barbara Rockefeller's "Technical Analysis for Dummies, 2nd ed." book, and is always true whether you're into cryptocurrencies or traditional stocks.

Diversify

"Diversification reduces risk. The proof of the concept in financial math won its proponents the Nobel prize, but the old adage has been around for centuries: “Don’t put all your eggs in one basket.” In technical trading, diversification applies in two places:
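The financial math behind that adage is easy to sketch: combining imperfectly correlated assets gives a portfolio volatility below that of its parts. A toy two-asset check, assuming equal weights, equal volatility, and zero correlation (numbers chosen purely for illustration):

```python
import math

sigma = 0.04  # each asset: 4% volatility (assumed)
w = 0.5       # equal weights

# Portfolio variance = w^2*s^2 + w^2*s^2 + 2*w*w*corr*s*s, with corr = 0
port_sigma = math.sqrt(2 * (w * sigma) ** 2)

# Splitting the eggs between two baskets cuts volatility by ~29%
reduction = 1 - port_sigma / sigma
```

With positive correlation the benefit shrinks, and with more uncorrelated baskets it grows - which is why the eggs-in-baskets rule won its formalizers a Nobel prize.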

Deciding on Indicators

Wait until next week and we'll go over those! We'll see which indicators fit with faster or slower trading plans (both are useful in Bitcoin), and from there you can branch off and build your plan accordingly.

Next Week:

I'll welcome redditors to either comment or PM me their trading plans. I'll do my best to look them over and offer suggestions or warnings as I see them. Again, I'm no guru or all-knowing being, and I'm not a certified trader or money manager or anything of that nature - but I'll offer the benefit of my research over the last few months regarding the indicators we've covered.
Stay curious, make money, have fun and see you next week.
submitted by ClydeMachine to BitcoinMarkets

Subreddit Stats: cs7646_fall2017 top posts from 2017-08-23 to 2017-12-10 22:43 PDT

Period: 108.98 days

                    Submissions   Comments
Total               999           10425
Rate (per day)      9.17          95.73
Unique Redditors    361           695
Combined Score      4162          17424
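As a quick arithmetic check, the per-day rates follow from the totals and the period length (the small discrepancy on the comment rate comes from rounding in the reported period):

```python
# Reproduce the "Rate (per day)" row from the totals and the period
period_days = 108.98
submissions, comments = 999, 10425

sub_rate = submissions / period_days   # ~9.17, matching the table
com_rate = comments / period_days      # ~95.7
```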

Top Submitters' Top Submissions

  1. 296 points, 24 submissions: tuckerbalch
    1. Project 2 Megathread (optimize_something) (33 points, 475 comments)
    2. project 3 megathread (assess_learners) (27 points, 1130 comments)
    3. For online students: Participation check #2 (23 points, 47 comments)
    4. ML / Data Scientist internship and full time job opportunities (20 points, 36 comments)
    5. Advance information on Project 3 (19 points, 22 comments)
    6. participation check #3 (19 points, 29 comments)
    7. manual_strategy project megathread (17 points, 825 comments)
    8. project 4 megathread (defeat_learners) (15 points, 209 comments)
    9. project 5 megathread (marketsim) (15 points, 484 comments)
    10. QLearning Robot project megathread (12 points, 691 comments)
  2. 278 points, 17 submissions: davebyrd
    1. A little more on Pandas indexing/slicing ([] vs ix vs iloc vs loc) and numpy shapes (37 points, 10 comments)
    2. Project 1 Megathread (assess_portfolio) (34 points, 466 comments)
    3. marketsim grades are up (25 points, 28 comments)
    4. Midterm stats (24 points, 32 comments)
    5. Welcome to CS 7646 MLT! (23 points, 132 comments)
    6. How to interact with TAs, discuss grades, performance, request exceptions... (18 points, 31 comments)
    7. assess_portfolio grades have been released (18 points, 34 comments)
    8. Midterm grades posted to T-Square (15 points, 30 comments)
    9. Removed posts (15 points, 2 comments)
    10. assess_portfolio IMPORTANT README: about sample frequency (13 points, 26 comments)
  3. 118 points, 17 submissions: yokh_cs7646
    1. Exam 2 Information (39 points, 40 comments)
    2. Reformat Assignment Pages? (14 points, 2 comments)
    3. What did the real-life Michael Burry have to say? (13 points, 2 comments)
    4. PSA: Read the Rubric carefully and ahead-of-time (8 points, 15 comments)
    5. How do I know that I'm correct and not just lucky? (7 points, 31 comments)
    6. ML Papers and News (7 points, 5 comments)
    7. What are "question pools"? (6 points, 4 comments)
    8. Explanation of "Regression" (5 points, 5 comments)
    9. GT Github taking FOREVER to push to..? (4 points, 14 comments)
    10. Dead links on the course wiki (3 points, 2 comments)
  4. 67 points, 13 submissions: harshsikka123
    1. To all those struggling, some words of courage! (20 points, 18 comments)
    2. Just got locked out of my apartment, am submitting from a stairwell (19 points, 12 comments)
    3. Thoroughly enjoying the lectures, some of the best I've seen! (13 points, 13 comments)
    4. Just for reference, how long did Assignment 1 take you all to implement? (3 points, 31 comments)
    5. Grade_Learners Taking about 7 seconds on Buffet vs 5 on Local, is this acceptable if all tests are passing? (2 points, 2 comments)
    6. Is anyone running into the Runtime Error, Invalid DISPLAY variable when trying to save the figures as pdfs to the Buffet servers? (2 points, 9 comments)
    7. Still not seeing an ML4T onboarding test on ProctorTrack (2 points, 10 comments)
    8. Any news on when Optimize_Something grades will be released? (1 point, 1 comment)
    9. Baglearner RMSE and leaf size? (1 point, 2 comments)
    10. My results are oh so slightly off, any thoughts? (1 point, 11 comments)
  5. 63 points, 10 submissions: htrajan
    1. Sample test case: missing data (22 points, 36 comments)
    2. Optimize_something test cases (13 points, 22 comments)
    3. Met Burt Malkiel today (6 points, 1 comment)
    4. Heads up: Dataframe.std != np.std (5 points, 5 comments)
    5. optimize_something: graph (5 points, 29 comments)
    6. Schedule still reflecting shortened summer timeframe? (4 points, 3 comments)
    7. Quick clarification about InsaneLearner (3 points, 8 comments)
    8. Test cases using rfr? (3 points, 5 comments)
    9. Input format of rfr (2 points, 1 comment)
    10. [Shameless recruiting post] Wealthfront is hiring! (0 points, 9 comments)
  6. 62 points, 7 submissions: swamijay
    1. defeat_learner test case (34 points, 38 comments)
    2. Project 3 test cases (15 points, 27 comments)
    3. Defeat_Learner - related questions (6 points, 9 comments)
    4. Options risk/reward (2 points, 0 comments)
    5. manual strategy - you must remain in the position for 21 trading days. (2 points, 9 comments)
    6. standardizing values (2 points, 0 comments)
    7. technical indicators - period for moving averages, or anything that looks past n days (1 point, 3 comments)
  7. 61 points, 9 submissions: gatech-raleighite
    1. Protip: Better reddit search (22 points, 9 comments)
    2. Helpful numpy array cheat sheet (16 points, 10 comments)
    3. In your experience Professor, Mr. Byrd, which strategy is "best" for trading ? (12 points, 10 comments)
    4. Industrial strength or mature versions of the assignments ? (4 points, 2 comments)
    5. What is the correct (faster) way of doing this bit of pandas code (updating multiple slice values) (2 points, 10 comments)
    6. What is the correct (pythonesque?) way to select 60% of rows ? (2 points, 11 comments)
    7. How to get adjusted close price for funds not publicly traded (TSP) ? (1 point, 2 comments)
    8. Is there a way to only test one or 2 of the learners using grade_learners.py ? (1 point, 10 comments)
    9. OMS CS Digital Career Seminar Series - Scott Leitstein recording available online? (1 point, 4 comments)
  8. 60 points, 2 submissions: reyallan
    1. [Project Questions] Unit Tests for assess_portfolio assignment (58 points, 52 comments)
    2. Financial data, technical indicators and live trading (2 points, 8 comments)
  9. 59 points, 12 submissions: dyllll
    1. Please upvote helpful posts and other advice. (26 points, 1 comment)
    2. Books to further study in trading with machine learning? (14 points, 9 comments)
    3. Is Q-Learning the best reinforcement learning method for stock trading? (4 points, 4 comments)
    4. Any way to download the lessons? (3 points, 4 comments)
    5. Can a TA please contact me? (2 points, 7 comments)
    6. Is the vectorization code from the youtube video available to us? (2 points, 2 comments)
    7. Position of webcam (2 points, 15 comments)
    8. Question about assignment one (2 points, 5 comments)
    9. Are udacity quizzes recorded? (1 point, 2 comments)
    10. Does normalization of indicators matter in a Q-Learner? (1 point, 7 comments)
  10. 56 points, 2 submissions: jan-laszlo
    1. Proper git workflow (43 points, 19 comments)
    2. Adding you SSH key for password-less access to remote hosts (13 points, 7 comments)
  11. 53 points, 1 submission: agifft3_omscs
    1. [Project Questions] Unit Tests for optimize_something assignment (53 points, 94 comments)
  12. 50 points, 16 submissions: BNielson
    1. Regression Trees (7 points, 9 comments)
    2. Two Interpretations of RFR are leading to two different possible Sharpe Ratios -- Need Instructor clarification ASAP (5 points, 3 comments)
    3. PYTHONPATH=../:. python grade_analysis.py (4 points, 7 comments)
    4. Running on Windows and PyCharm (4 points, 4 comments)
    5. Studying for the midterm: python questions (4 points, 0 comments)
    6. Assess Learners Grader (3 points, 2 comments)
    7. Manual Strategy Grade (3 points, 2 comments)
    8. Rewards in Q Learning (3 points, 3 comments)
    9. SSH/Putty on Windows (3 points, 4 comments)
    10. Slight contradiction on ProctorTrack Exam (3 points, 4 comments)
  13. 49 points, 7 submissions: j0shj0nes
    1. QLearning Robot - Finalized and Released Soon? (18 points, 4 comments)
    2. Flash Boys, HFT, frontrunning... (10 points, 3 comments)
    3. Deprecations / errata (7 points, 5 comments)
    4. Udacity lectures via GT account, versus personal account (6 points, 2 comments)
    5. Python: console-driven development (5 points, 5 comments)
    6. Buffet pandas / numpy versions (2 points, 2 comments)
    7. Quant research on earnings calls (1 point, 0 comments)
  14. 45 points, 11 submissions: Zapurza
    1. Suggestion for Strategy learner mega thread. (14 points, 1 comment)
    2. Which lectures to watch for upcoming project q learning robot? (7 points, 5 comments)
    3. In schedule file, there is no link against 'voting ensemble strategy'? Scheduled for Nov 13-20 week (6 points, 3 comments)
    4. How to add questions to the question bank? I can see there is 2% credit for that. (4 points, 5 comments)
    5. Scratch paper use (3 points, 6 comments)
    6. The big short movie link on you tube says the video is not available in your country. (3 points, 9 comments)
    7. Distance between training data date and future forecast date (2 points, 2 comments)
    8. News affecting stock market and machine learning algorithms (2 points, 4 comments)
    9. pandas import in pydev (2 points, 0 comments)
    10. Assess learner server error (1 point, 2 comments)
  15. 43 points, 23 submissions: chvbs2000
    1. Is the Strategy Learner finalized? (10 points, 3 comments)
    2. Test extra 15 test cases for marketsim (3 points, 12 comments)
    3. Confusion between the term computing "back-in time" and "going forward" (2 points, 1 comment)
    4. How to define "each transaction"? (2 points, 4 comments)
    5. How to filling the assignment into Jupyter Notebook? (2 points, 4 comments)
    6. IOError: File ../data/SPY.csv does not exist (2 points, 4 comments)
    7. Issue in Access to machines at Georgia Tech via MacOS terminal (2 points, 5 comments)
    8. Reading data from Jupyter Notebook (2 points, 3 comments)
    9. benchmark vs manual strategy vs best possible strategy (2 points, 2 comments)
    10. global name 'pd' is not defined (2 points, 4 comments)
  16. 43 points, 15 submissions: shuang379
    1. How to test my code on buffet machine? (10 points, 15 comments)
    2. Can we get the ppt for "Decision Trees"? (8 points, 2 comments)
    3. python question pool question (5 points, 6 comments)
    4. set up problems (3 points, 4 comments)
    5. Do I need another camera for scanning? (2 points, 9 comments)
    6. Is chapter 9 covered by the midterm? (2 points, 2 comments)
    7. Why grade_analysis.py could run even if I rm analysis.py? (2 points, 5 comments)
    8. python question pool No.48 (2 points, 6 comments)
    9. where could we find old versions of the rest projects? (2 points, 2 comments)
    10. where to put ml4t-libraries to install those libraries? (2 points, 1 comment)
  17. 42 points, 14 submissions: larrva
    1. is there a mistake in How-to-learn-a-decision-tree.pdf (7 points, 7 comments)
    2. maximum recursion depth problem (6 points, 10 comments)
    3. [Urgent]Unable to use proctortrack in China (4 points, 21 comments)
    4. manual_strategy: number of indicators to use (3 points, 10 comments)
    5. Assignment 2: Got 63 points. (3 points, 3 comments)
    6. Software installation workshop (3 points, 7 comments)
    7. question regarding functools32 version (3 points, 3 comments)
    8. workshop on Aug 31 (3 points, 8 comments)
    9. Mount remote server to local machine (2 points, 2 comments)
    10. any suggestion on objective function (2 points, 3 comments)
  18. 41 points, 8 submissions: Ran__Ran
    1. Any resource will be available for final exam? (19 points, 6 comments)
    2. Need clarification on size of X, Y in defeat_learners (7 points, 10 comments)
    3. Get the same date format as in example chart (4 points, 3 comments)
    4. Cannot log in GitHub Desktop using GT account? (3 points, 3 comments)
    5. Do we have notes or ppt for Time Series Data? (3 points, 5 comments)
    6. Can we know the commission & market impact for short example? (2 points, 7 comments)
    7. Course schedule export issue (2 points, 15 comments)
    8. Buying/seeking beta v.s. buying/seeking alpha (1 point, 6 comments)
  19. 38 points, 4 submissions: ProudRamblinWreck
    1. Exam 2 Study topics (21 points, 5 comments)
    2. Reddit participation as part of grade? (13 points, 32 comments)
    3. Will birds chirping in the background flag me on Proctortrack? (3 points, 5 comments)
    4. Midterm Study Guide question pools (1 point, 2 comments)
  20. 37 points, 6 submissions: gatechben
    1. Submission page for strategy learner? (14 points, 10 comments)
    2. PSA: The grading script for strategy_learner changed on the 26th (10 points, 9 comments)
    3. Where is util.py supposed to be located? (8 points, 8 comments)
    4. PSA:. The default dates in the assignment 1 template are not the same as the examples on the assignment page. (2 points, 1 comment)
    5. Schedule: Discussion of upcoming trading projects? (2 points, 3 comments)
    6. [defeat_learners] More than one column for X? (1 point, 1 comment)
  21. 37 points, 3 submissions: jgeiger
    1. Please send/announce when changes are made to the project code (23 points, 7 comments)
    2. The Big Short on Netflix for OMSCS students (week of 10/16) (11 points, 6 comments)
    3. Typo(?) for Assess_portfolio wiki page (3 points, 2 comments)
  22. 35 points, 10 submissions: ltian35
    1. selecting row using .ix (8 points, 9 comments)
    2. Will the following 2 topics be included in the final exam(online student)? (7 points, 4 comments)
    3. udacity quiz (7 points, 4 comments)
    4. pdf of lecture (3 points, 4 comments)
    5. print friendly version of the course schedule (3 points, 9 comments)
    6. about learner regression vs classificaiton (2 points, 2 comments)
    7. is there a simple way to verify the correctness of our decision tree (2 points, 4 comments)
    8. about Building an ML-based forex strategy (1 point, 2 comments)
    9. about technical analysis (1 point, 6 comments)
    10. final exam online time period (1 point, 2 comments)
  23. 33 points, 2 submissions: bhrolenok
    1. Assess learners template and grading script is now available in the public repository (24 points, 0 comments)
    2. Tutorial for software setup on Windows (9 points, 35 comments)
  24. 31 points, 4 submissions: johannes_92
    1. Deadline extension? (26 points, 40 comments)
    2. Pandas date indexing issues (2 points, 5 comments)
    3. Why do we subtract 1 from SMA calculation? (2 points, 3 comments)
    4. Unexpected number of calls to query, sum=20 (should be 20), max=20 (should be 1), min=20 (should be 1) -bash: syntax error near unexpected token `(' (1 point, 3 comments)
  25. 30 points, 5 submissions: log_base_pi
    1. The Massive Hedge Fund Betting on AI [Article] (9 points, 1 comment)
    2. Useful Python tips and tricks (8 points, 10 comments)
    3. Video of overview of remaining projects with Tucker Balch (7 points, 1 comment)
    4. Will any material from the lecture by Goldman Sachs be covered on the exam? (5 points, 1 comment)
    5. What will the 2nd half of the course be like? (1 point, 8 comments)
  26. 30 points, 4 submissions: acschwabe
    1. Assignment and Exam Calendar (ICS File) (17 points, 6 comments)
    2. Please OMG give us any options for extra credit (8 points, 12 comments)
    3. Strategy learner question (3 points, 1 comment)
    4. Proctortrack: Do we need to schedule our test time? (2 points, 10 comments)
  27. 29 points, 9 submissions: _ant0n_
    1. Next assignment? (9 points, 6 comments)
    2. Proctortrack Onboarding test? (6 points, 11 comments)
    3. Manual strategy: Allowable positions (3 points, 7 comments)
    4. Anyone watched Black Scholes documentary? (2 points, 16 comments)
    5. Buffet machines hardware (2 points, 6 comments)
    6. Defeat learners: clarification (2 points, 4 comments)
    7. Is 'optimize_something' on the way to class GitHub repo? (2 points, 6 comments)
    8. assess_portfolio(... gen_plot=True) (2 points, 8 comments)
    9. remote job != remote + international? (1 point, 15 comments)
  28. 26 points, 10 submissions: umersaalis
    1. comments.txt (7 points, 6 comments)
    2. Assignment 2: report.pdf (6 points, 30 comments)
    3. Assignment 2: report.pdf sharing & plagiarism (3 points, 12 comments)
    4. Max Recursion Limit (3 points, 10 comments)
    5. Parametric vs Non-Parametric Model (3 points, 13 comments)
    6. Bag Learner Training (1 point, 2 comments)
    7. Decision Tree Issue: (1 point, 2 comments)
    8. Error in Running DTLearner and RTLearner (1 point, 12 comments)
    9. My Results for the four learners. Please check if you guys are getting values somewhat near to these. Exact match may not be there due to randomization. (1 point, 4 comments)
    10. Can we add the assignments and solutions to our public github profile? (0 points, 7 comments)
  29. 26 points, 6 submissions: abiele
    1. Recommended Reading? (13 points, 1 comment)
    2. Number of Indicators Used by Actual Trading Systems (7 points, 6 comments)
    3. Software Install Instructions From TA's Video Not Working (2 points, 2 comments)
    4. Suggest that TA/Instructor Contact Info Should be Added to the Syllabus (2 points, 2 comments)
    5. ML4T Software Setup (1 point, 3 comments)
    6. Where can I find the grading folder? (1 point, 4 comments)
  30. 26 points, 6 submissions: tomatonight
    1. Do we have all the information needed to finish the last project Strategy learner? (15 points, 3 comments)
    2. Does anyone interested in cryptocurrency trading/investing/others? (3 points, 6 comments)
    3. length of portfolio daily return (3 points, 2 comments)
    4. Did Michael Burry, Jamie&Charlie enter the short position too early? (2 points, 4 comments)
    5. where to check participation score (2 points, 1 comment)
    6. Where to collect the midterm exam? (forgot to take it last week) (1 point, 3 comments)
  31. 26 points, 3 submissions: hilo260
    1. Is there a template for optimize_something on GitHub? (14 points, 3 comments)
    2. Marketism project? (8 points, 6 comments)
    3. "Do not change the API" (4 points, 7 comments)
  32. 26 points, 3 submissions: niufen
    1. Windows Server Setup Guide (23 points, 16 comments)
    2. Strategy Learner Adding UserID as Comment (2 points, 2 comments)
    3. Connect to server via Python Error (1 point, 6 comments)
  33. 26 points, 3 submissions: whoyoung99
    1. How much time you spend on Assess Learner? (13 points, 47 comments)
    2. Git clone repository without fork (8 points, 2 comments)
    3. Just for fun (5 points, 1 comment)
  34. 25 points, 8 submissions: SharjeelHanif
    1. When can we discuss defeat learners methods? (10 points, 1 comment)
    2. Are the buffet servers really down? (3 points, 2 comments)
    3. Are the midterm results in proctortrack gone? (3 points, 3 comments)
    4. Will these finance topics be covered on the final? (3 points, 9 comments)
    5. Anyone get set up with Proctortrack? (2 points, 10 comments)
    6. Incentives Quiz Discussion (2-01, Lesson 11.8) (2 points, 3 comments)
    7. Anyone from Houston, TX (1 point, 1 comment)
    8. How can I trace my error back to a line of code? (assess learners) (1 point, 3 comments)
  35. 25 points, 5 submissions: jlamberts3
    1. Conda vs VirtualEnv (7 points, 8 comments)
    2. Cool Portfolio Backtesting Tool (6 points, 6 comments)
    3. Warren Buffett wins $1M bet made a decade ago that the S&P 500 stock index would outperform hedge funds (6 points, 12 comments)
    4. Windows Ubuntu Subsystem Putty Alternative (4 points, 0 comments)
    5. Algorithmic Trading Of Digital Assets (2 points, 0 comments)
  36. 25 points, 4 submissions: suman_paul
    1. Grade statistics (9 points, 3 comments)
    2. Machine Learning book by Mitchell (6 points, 11 comments)
    3. Thank You (6 points, 6 comments)
    4. Assignment1 ready to be cloned? (4 points, 4 comments)
  37. 25 points, 3 submissions: Spareo
    1. Submit Assignments Function (OS X/Linux) (15 points, 6 comments)
    2. Quantsoftware Site down? (8 points, 38 comments)
    3. ML4T_2017Spring folder on Buffet server?? (2 points, 5 comments)
  38. 24 points, 14 submissions: nelsongcg
    1. Is it realistic for us to try to build our own trading bot and profit? (6 points, 21 comments)
    2. Is the risk free rate zero for any country? (3 points, 7 comments)
    3. Models and black swans - discussion (3 points, 0 comments)
    4. Normal distribution assumption for options pricing (2 points, 3 comments)
    5. Technical analysis for cryptocurrency market? (2 points, 4 comments)
    6. A counter argument to models by Nassim Taleb (1 point, 0 comments)
    7. Are we demandas to use the sample for part 1? (1 point, 1 comment)
    8. Benchmark for "trusting" your trading algorithm (1 point, 5 comments)
    9. Don't these two statements on the project description contradict each other? (1 point, 2 comments)
    10. Forgot my TA (1 point, 6 comments)
  39. 24 points, 11 submissions: nurobezede
    1. Best way to obtain survivor bias free stock data (8 points, 1 comment)
    2. Please confirm Midterm is from October 13-16 online with proctortrack. (5 points, 2 comments)
    3. Are these DTlearner Corr values good? (2 points, 6 comments)
    4. Testing gen_data.py (2 points, 3 comments)
    5. BagLearner of Baglearners says 'Object is not callable' (1 point, 8 comments)
    6. DTlearner training RMSE none zero but almost there (1 point, 2 comments)
    7. How to submit analysis using git and confirm it? (1 point, 2 comments)
    8. Passing kwargs to learners in a BagLearner (1 point, 5 comments)
    9. Sampling for bagging tree (1 point, 8 comments)
    10. code failing the 18th test with grade_learners.py (1 point, 6 comments)
  40. 24 points, 4 submissions: AeroZach
    1. questions about how to build a machine learning system that's going to work well in a real market (12 points, 6 comments)
    2. Survivor Bias Free Data (7 points, 5 comments)
    3. Genetic Algorithms for Feature selection (3 points, 5 comments)
    4. How far back can you train? (2 points, 2 comments)
  41. 23 points, 9 submissions: vsrinath6
    1. Participation check #3 - Haven't seen it yet (5 points, 5 comments)
    2. What are the tasks for this week? (5 points, 12 comments)
    3. No projects until after the mid-term? (4 points, 5 comments)
    4. Format / Syllabus for the exams (2 points, 3 comments)
    5. Has there been a Participation check #4? (2 points, 8 comments)
    6. Project 3 not visible on T-Square (2 points, 3 comments)
    7. Assess learners - do we need to check is method implemented for BagLearner? (1 point, 4 comments)
    8. Correct number of days reported in the dataframe (should be the number of trading days between the start date and end date, inclusive). (1 point, 0 comments)
    9. RuntimeError: Invalid DISPLAY variable (1 point, 2 comments)
  42. 23 points, 8 submissions: nick_algorithm
    1. Help with getting Average Daily Return Right (6 points, 7 comments)
    2. Hint for args argument in scipy minimize (5 points, 2 comments)
    3. How do you make money off of highly volatile (high SDDR) stocks? (4 points, 5 comments)
    4. Can We Use Code Obtained from Class To Make Money without Fear of Being Sued (3 points, 6 comments)
    5. Is the Std for Bollinger Bands calculated over the same timespan of the Moving Average? (2 points, 2 comments)
    6. Can't run grade_learners.py but I'm not doing anything different from the last assignment (?) (1 point, 5 comments)
    7. How to determine value at terminal node of tree? (1 point, 1 comment)
    8. Is there a way to get Reddit announcements piped to email (or have a subsequent T-Square announcement published simultaneously) (1 point, 2 comments)
  43. 23 points, 1 submission: gong6
    1. Is manual strategy ready? (23 points, 6 comments)
  44. 21 points, 6 submissions: amchang87
    1. Reason for public reddit? (6 points, 4 comments)
    2. Manual Strategy - 21 day holding Period (4 points, 12 comments)
    3. Sharpe Ratio (4 points, 6 comments)
    4. Manual Strategy - No Position? (3 points, 3 comments)
    5. ML / Manual Trader Performance (2 points, 0 comments)
    6. T-Square Submission Missing? (2 points, 3 comments)
  45. 21 points, 6 submissions: fall2017_ml4t_cs_god
    1. PSA: When typing in code, please use 'formatting help' to see how to make the code read cleaner. (8 points, 2 comments)
    2. Why do Bollinger Bands use 2 standard deviations? (5 points, 20 comments)
    3. How do I log into the [email protected]? (3 points, 1 comment)
    4. Is midterm 2 cumulative? (2 points, 3 comments)
    5. Where can we learn about options? (2 points, 2 comments)
    6. How do you calculate the analysis statistics for bps and manual strategy? (1 point, 1 comment)
  46. 21 points, 5 submissions: Jmitchell83
    1. Manual Strategy Grades (12 points, 9 comments)
    2. two-factor (3 points, 6 comments)
    3. Free to use volume? (2 points, 1 comment)
    4. Is MC1-Project-1 different than assess_portfolio? (2 points, 2 comments)
    5. Online Participation Checks (2 points, 4 comments)
  47. 21 points, 5 submissions: Sergei_B
    1. Do we need to worry about missing data for Asset Portfolio? (14 points, 13 comments)
    2. How do you get data from yahoo in panda? the sample old code is below: (2 points, 3 comments)
    3. How to fix import pandas as pd ImportError: No module named pandas? (2 points, 4 comments)
    4. Python Practice exam Question 48 (2 points, 2 comments)
    5. Mac: "virtualenv : command not found" (1 point, 2 comments)
  48. 21 points, 3 submissions: mharrow3
    1. First time reddit user .. (17 points, 37 comments)
    2. Course errors/types (2 points, 2 comments)
    3. Install course software on macOS using Vagrant .. (2 points, 0 comments)
  49. 20 points, 9 submissions: iceguyvn
    1. Manual strategy implementation for future projects (4 points, 15 comments)
    2. Help with correlation calculation (3 points, 15 comments)
    3. Help! maximum recursion depth exceeded (3 points, 10 comments)
    4. Help: how to index by date? (2 points, 4 comments)
    5. How to attach a 1D array to a 2D array? (2 points, 2 comments)
    6. How to set a single cell in a 2D DataFrame? (2 points, 4 comments)
    7. Next assignment after marketsim? (2 points, 4 comments)
    8. Pythonic way to detect the first row? (1 point, 6 comments)
    9. Questions regarding seed (1 point, 1 comment)
  50. 20 points, 3 submissions: JetsonDavis
    1. Push back assignment 3? (10 points, 14 comments)
    2. Final project (9 points, 3 comments)
    3. Numpy versions (1 point, 2 comments)
  51. 20 points, 2 submissions: pharmerino
    1. assess_portfolio test cases (16 points, 88 comments)
    2. ML4T Assignments (4 points, 6 comments)

Top Commenters

  1. tuckerbalch (2296 points, 1185 comments)
  2. davebyrd (1033 points, 466 comments)
  3. yokh_cs7646 (320 points, 177 comments)
  4. rgraziano3 (266 points, 147 comments)
  5. j0shj0nes (264 points, 148 comments)
  6. i__want__piazza (236 points, 127 comments)
  7. swamijay (227 points, 116 comments)
  8. _ant0n_ (205 points, 149 comments)
  9. ml4tstudent (204 points, 117 comments)
  10. gatechben (179 points, 107 comments)
  11. BNielson (176 points, 108 comments)
  12. jameschanx (176 points, 94 comments)
  13. Artmageddon (167 points, 83 comments)
  14. htrajan (162 points, 81 comments)
  15. boyko11 (154 points, 99 comments)
  16. alyssa_p_hacker (146 points, 80 comments)
  17. log_base_pi (141 points, 80 comments)
  18. Ran__Ran (139 points, 99 comments)
  19. johnsmarion (136 points, 86 comments)
  20. jgorman30_gatech (135 points, 102 comments)
  21. dyllll (125 points, 91 comments)
  22. MikeLachmayr (123 points, 95 comments)
  23. awhoof (113 points, 72 comments)
  24. SharjeelHanif (106 points, 59 comments)
  25. larrva (101 points, 69 comments)
  26. augustinius (100 points, 52 comments)
  27. oimesbcs (99 points, 67 comments)
  28. vansh21k (98 points, 62 comments)
  29. W1redgh0st (97 points, 70 comments)
  30. ybai67 (96 points, 41 comments)
  31. JuanCarlosKuriPinto (95 points, 54 comments)
  32. acschwabe (93 points, 58 comments)
  33. pharmerino (92 points, 47 comments)
  34. jgeiger (91 points, 28 comments)
  35. Zapurza (88 points, 70 comments)
  36. jyoms (87 points, 55 comments)
  37. omscs_zenan (87 points, 44 comments)
  38. nurobezede (85 points, 64 comments)
  39. BelaZhu (83 points, 50 comments)
  40. jason_gt (82 points, 36 comments)
  41. shuang379 (81 points, 64 comments)
  42. ggatech (81 points, 51 comments)
  43. nitinkodial_gatech (78 points, 59 comments)
  44. harshsikka123 (77 points, 55 comments)
  45. bkeenan7 (76 points, 49 comments)
  46. moxyll (76 points, 32 comments)
  47. nelsongcg (75 points, 53 comments)
  48. nickzelei (75 points, 41 comments)
  49. hunter2omscs (74 points, 29 comments)
  50. pointblank41 (73 points, 36 comments)
  51. zheweisun (66 points, 48 comments)
  52. bs_123 (66 points, 36 comments)
  53. storytimeuva (66 points, 36 comments)
  54. sva6 (66 points, 31 comments)
  55. bhrolenok (66 points, 27 comments)
  56. lingkaizuo (63 points, 46 comments)
  57. Marvel_this (62 points, 36 comments)
  58. agifft3_omscs (62 points, 35 comments)
  59. ssung40 (61 points, 47 comments)
  60. amchang87 (61 points, 32 comments)
  61. joshuak_gatech (61 points, 30 comments)
  62. fall2017_ml4t_cs_god (60 points, 50 comments)
  63. ccrouch8 (60 points, 45 comments)
  64. nick_algorithm (60 points, 29 comments)
  65. JetsonDavis (59 points, 35 comments)
  66. yjacket103 (58 points, 36 comments)
  67. hilo260 (58 points, 29 comments)
  68. coolwhip1234 (58 points, 15 comments)
  69. chvbs2000 (57 points, 49 comments)
  70. suman_paul (57 points, 29 comments)
  71. masterm (57 points, 23 comments)
  72. RolfKwakkelaar (55 points, 32 comments)
  73. rpb3 (55 points, 23 comments)
  74. venkatesh8 (54 points, 30 comments)
  75. omscs_avik (53 points, 37 comments)
  76. bman8810 (52 points, 31 comments)
  77. snladak (51 points, 31 comments)
  78. dfihn3 (50 points, 43 comments)
  79. mlcrypto (50 points, 32 comments)
  80. omscs-student (49 points, 26 comments)
  81. NellVega (48 points, 32 comments)
  82. booglespace (48 points, 23 comments)
  83. ccortner3 (48 points, 23 comments)
  84. caa5042 (47 points, 34 comments)
  85. gcalma3 (47 points, 25 comments)
  86. krushnatmore (44 points, 32 comments)
  87. sn_48 (43 points, 22 comments)
  88. thenewprofessional (43 points, 16 comments)
  89. urider (42 points, 33 comments)
  90. gatech-raleighite (42 points, 30 comments)
  91. chrisong2017 (41 points, 26 comments)
  92. ProudRamblinWreck (41 points, 24 comments)
  93. kramey8 (41 points, 24 comments)
  94. coderafk (40 points, 28 comments)
  95. niufen (40 points, 23 comments)
  96. tholladay3 (40 points, 23 comments)
  97. SaberCrunch (40 points, 22 comments)
  98. gnr11 (40 points, 21 comments)
  99. nadav3 (40 points, 18 comments)
  100. gt7431a (40 points, 16 comments)

Top Submissions

  1. [Project Questions] Unit Tests for assess_portfolio assignment by reyallan (58 points, 52 comments)
  2. [Project Questions] Unit Tests for optimize_something assignment by agifft3_omscs (53 points, 94 comments)
  3. Proper git workflow by jan-laszlo (43 points, 19 comments)
  4. Exam 2 Information by yokh_cs7646 (39 points, 40 comments)
  5. A little more on Pandas indexing/slicing ([] vs ix vs iloc vs loc) and numpy shapes by davebyrd (37 points, 10 comments)
  6. Project 1 Megathread (assess_portfolio) by davebyrd (34 points, 466 comments)
  7. defeat_learner test case by swamijay (34 points, 38 comments)
  8. Project 2 Megathread (optimize_something) by tuckerbalch (33 points, 475 comments)
  9. project 3 megathread (assess_learners) by tuckerbalch (27 points, 1130 comments)
  10. Deadline extension? by johannes_92 (26 points, 40 comments)

Top Comments

  1. 34 points: jgeiger's comment in QLearning Robot project megathread
  2. 31 points: coolwhip1234's comment in QLearning Robot project megathread
  3. 30 points: tuckerbalch's comment in Why Professor is usually late for class?
  4. 23 points: davebyrd's comment in Deadline extension?
  5. 20 points: jason_gt's comment in What would be a good quiz question regarding The Big Short?
  6. 19 points: yokh_cs7646's comment in For online students: Participation check #2
  7. 17 points: i__want__piazza's comment in project 3 megathread (assess_learners)
  8. 17 points: nathakhanh2's comment in Project 2 Megathread (optimize_something)
  9. 17 points: pharmerino's comment in Midterm study Megathread
  10. 17 points: tuckerbalch's comment in Midterm grades posted to T-Square
Generated with BBoe's Subreddit Stats
submitted by subreddit_stats to subreddit_stats

Some basic algo trading questions on platform and data sources

Hi algoTrading,
I'm new to algorithmic trading and was hoping to get some advice on how to get set up. My questions fall into two categories: first, what platforms do you use for backtesting, and second, where do you get your data?
First, to clarify what I mean by 'platform': I am looking for a mechanism that will let me go (relatively) easily from idea to backtest. For instance, in Quantopian you can write a strategy and backtest it; the backtest generates the relevant metrics, and you do not have to worry about the code linking the strategy to the data. Ideally, the platform would also allow for more robust testing, like walk-forward testing and Monte Carlo analysis.
I have seen two websites in particular that seem to be popular: QuantConnect and Quantopian. I have looked into both, and I am not sure they are what I am after. First, I am a little skeptical of putting algos that I have poured my time and hard work into on sites that execute the code remotely. Second, I am more interested in derivatives trading than stock trading, so if I were to use an online platform, I would prefer one that included data on options/futures/forex. The language isn't a huge deal, but I have experience in Java and am learning Python for data science applications, so a platform that let me use Python or Java would be ideal.
Second is the data source. This is a little redundant for online platforms like Quantopian, which are already integrated with the data. However, I have also been looking at Amibroker, which runs locally on your computer and does not come bundled with data. I have seen that you can get some data from IB, and I'm thinking about switching over to them just for their data feeds. The data I am really interested in is 1) EOD options data, 2) intraday futures/forex data, and 3) intraday stocks/indexes/ETF data.
I am willing to pay for tools/data if necessary, but would prefer to keep costs as minimal as possible, as I am a losing trader with a day job.
Thanks in advance!
submitted by IAmBoredAsHell to algotrading
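The idea-to-backtest loop the question above describes can be sketched in a few lines of plain Python. This is only an illustration under stated assumptions: the moving-average crossover strategy, the function names, and the metrics are all hypothetical and not taken from Quantopian, QuantConnect, or any other platform mentioned.

```python
# Minimal, hypothetical backtest sketch: a moving-average crossover on a
# price series, reporting total return and maximum drawdown.  All names
# here are illustrative; this is a plain-Python toy, not a platform API.

def sma(prices, n, i):
    """Simple moving average of the n prices ending at index i."""
    return sum(prices[i - n + 1:i + 1]) / n

def backtest(prices, fast=3, slow=5):
    """Long when the fast SMA is above the slow SMA, flat otherwise."""
    equity, peak, max_dd = 1.0, 1.0, 0.0
    for i in range(slow, len(prices)):
        # Decide today's position from yesterday's signal (no look-ahead).
        go_long = sma(prices, fast, i - 1) > sma(prices, slow, i - 1)
        if go_long:
            equity *= prices[i] / prices[i - 1]
        peak = max(peak, equity)
        max_dd = max(max_dd, (peak - equity) / peak)
    return {"total_return": equity - 1.0, "max_drawdown": max_dd}

if __name__ == "__main__":
    prices = [100, 101, 102, 101, 103, 104, 102, 101, 105, 107]
    print(backtest(prices))
```

The more robust testing the question asks about then amounts to re-running `backtest()` over rolling windows (walk-forward) or over resampled price paths (Monte Carlo).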

How can I call a kernel function for multiple Thrust object member functions in CUDA?

Please note that I am an absolute beginner with CUDA, and everything below is untested pseudocode. I'm coming from JavaScript, and my C++ is also super rusty, so I apologize for my ignorance :)
I am attempting to use CUDA for backtesting many different forex strategies.
Using Thrust, I have 1000 objects instantiated from a class (pseudocode):
    #include <thrust/device_ptr.h>   // header names were lost in the original; Thrust headers assumed
    #include <thrust/device_new.h>

    #define N 1000

    typedef struct dataPoint {
        ...
    } dataPoint;

    class Strategy {
    public:
        __device__ __host__ Strategy(...) { ... }
        __device__ __host__ void backtest(dataPoint data) { ... }
    };

    int main() {
        dataPoint data[100000];
        thrust::device_ptr<Strategy> strategies[N];
        int i;

        // Instantiate 1000 strategies.
        for (i = 0; i < N; i++) {
            strategies[i] = thrust::device_new<Strategy>(...);
        }

        // Iterate over all 100000 data points.
        for (i = 0; i < 100000; i++) {
            // Somehow run .backtest(data[i]) on each strategy here,
            // i.e. run backtest() in parallel for all 1000
            // strategy objects here.
        }
    }
Now let's say I'd like to run the .backtest() method on each object for each item in data. Procedurally I would do the following:
    // Iterate over all 100000 data points.
    for (j = 0; j < 100000; j++) {
        // Iterate over all 1000 strategies.
        for (i = 0; i < 1000; i++) {
            strategies[i].backtest(data[j]);
        }
    }
How might I accomplish this using CUDA such that .backtest() runs in parallel for all strategies each iteration j through the data?
If I have to completely rearchitect everything, so be it -- I'm open to whatever is necessary. If this isn't possible with classes, then so be it.
submitted by chaddjohnson to CUDA

Metatrader 4 - 99% Back-testing in 5 Simple Steps - YouTube
How I BACKTEST a Forex Trading Strategy in 2020 - YouTube
99% Backtesting on MT4 with the new Tick Data Suite v2 ...
How to Backtest A Trading Strategy in Excel - YouTube
How to BACKTEST a Forex Trading Strategy - YouTube
How To Backtest An Indicator With Metatrader 4 - YouTube
Back-Testing Jam Session: Backtest Strategy and Test Plan - Forex Trading Strategy

Forex Tools for Trading Analysis: Backtest Your Forex Trading Strategy

I'd like to backtest some strategies with forex data, but I'm not sure where to look for a good solution. I have an Oanda practice account, but can't figure out how to get historical/backtest data. I've also used Backtrader for stock data, but can't figure out whether there's a way to pull in forex data. I work primarily in Python, but I'm ...

How to backtest a forex trading strategy for free? We live in a great time: technology is getting better and cheaper, so even on a very limited budget you can start backtesting at no cost. If manual trading and testing is your thing, then I would recommend starting with TradingView. I ...

Forex backtesting software is a type of program that allows traders to test potential trading strategies using historical data. The software recreates the behaviour of trades and their reaction to a forex trading strategy, and the resulting data can then be used to measure and optimise the effectiveness of a given strategy before applying it to real market conditions.

Before you can begin trading your strategy on past market data, you must do a few things to prepare for backtesting. The first step is to have a computer with Windows on it; if you want to backtest on a Mac, consider installing Windows in a VirtualBox. That's beyond the scope of this guide, but a quick Google search will help ...

The most reliable forex data is offered on popular sites such as Tick Data, Inc. or CQG Data Factory. How does backtesting work? Depending on the backtesting software you use, you will get a variety of indicators, for example: Total Return on Equity (ROE): the return on all invested equity, expressed as a percentage. Total gains and ...
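The ROE indicator mentioned above is simple to compute yourself. A minimal sketch follows; the function and variable names are illustrative, not taken from any particular backtesting package:

```python
def return_on_equity(initial_equity, final_equity):
    """Total return on equity, expressed as a percentage of the equity
    initially invested (the ROE indicator described above)."""
    if initial_equity <= 0:
        raise ValueError("initial equity must be positive")
    return (final_equity - initial_equity) / initial_equity * 100.0

if __name__ == "__main__":
    # e.g. an account that grows from 10,000 to 11,250 over the test period
    print(return_on_equity(10_000, 11_250))  # 12.5 (percent)
```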
Another advantage that Forex Tester 4 has over Soft4FX is the amount of historical data it offers for backtesting. Forex Tester comes with 18 years of historical data from 12 different brokers, compared to 10 years from 2 brokers for the Soft4FX simulator; however, on either of them you can use as much historical data as your broker provides.


Metatrader 4 - 99% Back-testing in 5 Simple Steps - YouTube

In this video you can see how to backtest an indicator in the Metatrader 4 (MT4) platform. https://mql4tradingautomation.com/metatrader-backtesting-expert-ad...
This video will show you How to Backtest a Forex Trading Strategy, as well as 3 TIPS on BACKTESTING... Trading Platform I Use: https://www.tradingview.com/...
Demonstrates how to back-test your Expert Advisers (EAs) with Metatrader and get 99% modelling quality in 5 simple steps. The back-test is executed with qualit...
The Ultimate MT4 Backtesting Guide https://www.mt4backtest.com/ This video shows how to use the new Tick Data Suite 2 on MetaTrader 4 trading terminal to reach 99...
Backtesting software simulates your strategy on historical data and provides a backtesting report, which allows you to conduct proper trading system analysis. If you are new to forex trading and ...
Today we kick off #TheTradingEssentials Series, starting out with How I Backtest a Forex Trading Strategy in 2020... ----- Trading Platform I Use: https:...
This video shows how anybody can test their own trading strategies using Excel. I demonstrate how to use historic price data and to calculate technical indic...
