Theta Explained

What is Theta Fuel? What is the Theta testnet channel? Initial FAQs

With the introduction of Theta Fuel into the Theta protocol and the launch of our testnet channel, we know the community has lots of questions. These initial FAQs should provide some guidance, and we will continue to update the community as we further develop the token economics through the Theta testnet phase. You can also stop by our Telegram channel here for the latest news and to discuss with the Theta team!

Theta Fuel FAQs

What is Theta Fuel and what will it be used for?

Theta Fuel is the operational token of the Theta protocol. Users will use Theta Fuel to complete transactions, like paying a relay node to provide them with a video stream, or to deploy and interact with smart contracts. Relay nodes earn Theta Fuel for every video stream they relay to other users on the network.

How will Theta Fuel be generated? At what rate?

The genesis distribution of Theta Fuel will happen when the Theta mainnet launches on March 15th. For each Theta Token that you hold when the Theta Mainnet launches, you will also receive 5 Theta Fuel to seed the ecosystem. To ensure you receive this initial distribution, make sure to follow our mainnet token swap procedures. After the initial distribution of 5,000,000,000 TFUEL (5 for each of the 1 billion Theta Tokens), the supply will increase at an initial annual target rate of 5%. The new supply rate will be determined at the protocol level, and can be adjusted as needed by protocol consensus to provide the appropriate amount of new supply as demanded by platforms on the Theta Network.
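
As a quick sanity check on those numbers, here is a back-of-the-envelope sketch. The function name is ours, and it assumes the 5% target rate never changes, which the protocol can in fact adjust by consensus, so treat it as illustrative only:

```python
# Illustrative TFUEL supply arithmetic from the FAQ above:
# 5 TFUEL per THETA at genesis, then a ~5% annual target growth rate.
THETA_SUPPLY = 1_000_000_000
GENESIS_TFUEL_PER_THETA = 5
ANNUAL_TARGET_RATE = 0.05  # adjustable by protocol consensus

genesis_supply = THETA_SUPPLY * GENESIS_TFUEL_PER_THETA

def projected_supply(years, rate=ANNUAL_TARGET_RATE):
    """Hypothetical TFUEL supply after `years`, if the rate never changed."""
    return genesis_supply * (1 + rate) ** years

print(f"{genesis_supply:,}")          # 5,000,000,000
print(f"{projected_supply(1):,.0f}")  # 5,250,000,000
```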

Will each viewer need to pay Theta Fuel to pull video streams on the Theta Network?

Technically that’s true at the protocol level, but the actual model implemented with our initial partners, like MBN and Samsung VR, is that the cost of TFUEL falls on the video platform. Platforms subsidize users with the TFUEL necessary to pull video streams from relayers on the Theta Network. This makes sense, because video platforms are the ones most directly gaining from getting more viewers to pull their video stream from the Theta Network, in the form of lower CDN costs and higher user engagement. We think it’s critical that the end-user never has to go out and purchase any TFUEL tokens just to watch videos on the Theta Network — it’s just too much of a friction point for adoption.

What’s the difference between the Theta Token and Theta Fuel?

Theta Token (THETA): The governance token of the Theta protocol. THETA is used to stake as a Validator or Guardian node, contributing to block production and the protocol governance of the Theta Network. By staking and running a node, users will earn a proportional amount of the new TFUEL generated. The supply of THETA is fixed at 1 billion and will never increase. THETA currently exists as an ERC20 token and is traded on major exchanges. At Mainnet launch on March 15th, ERC20 THETA will be replaced by new THETA tokens on the Theta blockchain at a 1:1 ratio.

Theta Fuel (TFUEL): The operational token of the Theta protocol. TFUEL powers on-chain operations like payments to relayers for sharing a video stream, or for deploying or interacting with smart contracts. Relayers earn TFUEL for every video stream they relay to other users on the network. You can think of Theta Fuel as the “gas” of the protocol. At Mainnet launch on March 15th, TFUEL will be created as a native token on the Theta blockchain.

Why introduce a second currency at all?

The primary reasons are to separate the uses of staking/governance (with Theta) and operations/transactions (with Theta Fuel), and to enhance protocol security. You can read more about this reasoning in our blog post on governance here.

Theta testnet channel FAQs

What is the Theta testnet channel?

The testnet channel is the first example of the Theta blockchain and streaming protocol being integrated with a video platform. Users can share bandwidth with other peers, earning (test) Theta Fuel. This channel allows the Theta team to gather data on how the blockchain and streaming protocol are performing, so we can optimize and scale the protocol ahead of our mainnet launch on March 15th.

What browsers and operating systems are supported for the testnet?

Currently, the testnet channel supports Chrome and Firefox on PC and Mac. Support for iOS/Safari and Android will also be coming soon.

Why am I not relaying my stream to peers / pulling my stream from peers?

Since Theta Network is a peer-to-peer protocol, it is possible that you don’t have any nearby peers and/or the peers you are connected to are too far away to share bandwidth effectively.

I am sharing streams with my peers, why am I not earning any Theta Fuel?

One reason you may not be earning Theta Fuel is that when users show up on the test channel page, they immediately attempt to pull streams from other users on Theta Network. But their computer isn’t linked to a specific Theta Fuel wallet until they log in to SLIVER. So what’s happening is that they are pulling streams from you, and you are sharing your bandwidth with them, but because they have no Theta Fuel wallet attached, you aren’t getting Theta Fuel in return!

This couldn’t happen on the mainnet of course, because the protocol would require them to compensate you with Theta Fuel! We could cut off the non-logged-in users, but for the moment we want to err on the side of maximizing peers so we can maximize bandwidth offload. We are also working on an alternative fix that should make sure each user earns Theta Fuel for the bandwidth they are sharing.

Will Theta Fuel I earn on the testnet be carried over to the mainnet when it launches in Q1 2020?

Yes, as of February 20th any Theta Fuel you have will be carried over to the mainnet and will be real on-chain Theta Fuel.

The test channel isn’t working!

If you are having issues with the test channel, please fill out this Google form to tell us about it! Your feedback will help the Theta engineering team improve the protocol and make for a successful mainnet launch.

Plain English explanation of Theta notation?

What is a plain English explanation of Theta notation, with as little formal definition and as simple mathematics as possible?

How is Theta notation different from Big O notation? Could anyone explain in plain English?

How are they used in algorithm analysis? I am confused.

2 Answers

If an algorithm’s run time is Big Theta(f(n)), it is asymptotically bounded above and below by f(n). Big O is the same except that the bound is only above.

Intuitively, Big O(f(n)) says “we can be sure that, ignoring constant factors and terms, the run time never exceeds f(n).” In rough words, if you think of run time as “bad”, then Big O is a worst case. Big Theta(f(n)) says “we can be sure that, ignoring constant factors and terms, the run time always varies as f(n).” In other words, Big Theta is a known tight bound: it’s both worst case and best case.

A final try at intuition: Big O is “one-sided.” An O(n) run time is also O(n^2) and O(2^n). This is not true with Big Theta. If you have an algorithm run time that’s O(n), then you already have a proof that it’s not Big Theta(n^2). It may or may not be Big Theta(n).

An example is comparison sorting. Information theory tells us sorting requires at least log2(n!) ≈ n log n comparisons, and we have actually invented O(n log n) algorithms (where n is the number of elements), so comparison sorting is Big Theta(n log n).
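
As a concrete illustration of that tight bound (my own sketch, not from the answer above), we can count the comparisons a merge sort actually performs and check them against n log2 n:

```python
import math
import random

def merge_sort(a, counter):
    """Sort a list, counting element comparisons in counter[0]."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1                      # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Comparison counts track n * log2(n) up to constant factors and lower-order terms
for n in (10**2, 10**3, 10**4):
    counter = [0]
    merge_sort([random.random() for _ in range(n)], counter)
    print(n, counter[0], round(n * math.log2(n)))
```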

Optimizers Explained – Adam, Momentum and Stochastic Gradient Descent

Casper Hansen

MSc AI Student @ DTU. This is my Machine Learning journey ‘From Scratch’. Conveying what I learned in an easy-to-understand fashion is my priority.

Picking the right optimizer with the right parameters can help you squeeze the last bit of accuracy out of your neural network model. In this article, optimizers are explained from the classical to the newer approaches.

This post could be seen as a part three of how neural networks learn; in the previous posts, we have proposed the update rule as the one in gradient descent. Now we are exploring better and newer optimizers. If you want to know how we do a forward and backwards pass in a neural network, you would have to read the first part – especially how we calculate the gradient is covered in great detail.

If you are new to neural networks, you probably won’t understand this post without reading the first part.

I want to add, before explaining the different optimizers, that you really should read Sebastian Ruder’s paper An overview of gradient descent optimization algorithms. It’s a great resource that briefly describes many of the optimizers available today.

Stochastic Gradient Descent

This is the basic algorithm responsible for making neural networks converge, i.e. shift towards the optimum of the cost function. Multiple gradient descent algorithms exist, and I have mixed them together in previous posts. Here, I am not talking about batch (vanilla) gradient descent or mini-batch gradient descent.

The basic difference between batch gradient descent (BGD) and stochastic gradient descent (SGD) is that in SGD we only calculate the cost of one example for each step, while in BGD we have to calculate the cost for all training examples in the dataset. Naturally, this speeds up training greatly, and exactly this is the motivation behind SGD.

The equation for SGD is used to update parameters in a neural network – we use it in the backwards pass, using backpropagation to calculate the gradient $\nabla$:

$$\theta = \theta - \eta \cdot \nabla_\theta J(\theta;\, x, \, y)$$

This is how the equation is presented formally, and here is what each symbol means:

  • $\theta$ is a parameter (theta), e.g. your weights, biases and activations. Notice that we only update a single parameter for the neural network here, i.e. we could update a single weight.
  • $\eta$ is the learning rate (eta), but also sometimes alpha $\alpha$ or gamma $\gamma$ is used.
  • $\nabla$ is the gradient (nabla), which is taken of $J$. Gradient calculations are already explained extremely well in my other post.
  • $J$ is formally known as objective function, but most often it’s called cost function or loss function.

We take each parameter theta $\theta$ and update it by taking the original parameter $\theta$ and subtracting the learning rate $\eta$ times the rate of change $\nabla J(\theta)$.

$J(\theta;\, x, \, y)$ just means that we input our parameter theta $\theta$ along with a training example and label (e.g. a class). The semicolon is used to indicate that the parameter theta $\theta$ is different from the training example and label, which are separated by a comma.

Note that moving forward, the subscript $\theta$ in $\nabla_\theta$ will be left out for simplicity.

We can visualize what happens to a single weight $w$ in a cost function $C(w)$ (same as $J$). Naturally, what happens is that we find the derivative of the parameter $\theta$, which is $w$ in this case, and we update the parameter according to the equation above.

If the gradient (partial derivative) is positive, we step left; when it is negative, we step right. GIF’ed from a 3blue1brown video, with captions added.

Okay, we got some value theta $\theta$ and eta $\eta$ to work with. But what is that last thing in the equation, what does it mean? Let’s expand into the equation from the prior post (which you should have read).

Well this is now just a partial derivative, i.e. we find the cost function $C$, and inside that function, we find the derivative of theta $\theta$, but keep the rest of the function constant (we don’t touch the rest). The assumption here is that our training example with a label is provided, which is why it was removed on the right side.

We could even replace some of the terms to make it more readable. Say we wanted to update a weight $w$, with the learning rate $0.3$ and a cost function $C$:

$$w = w - 0.3 \cdot \frac{\partial C}{\partial w}$$

Well, we assume that we know $w$, so the only thing stopping us from calculating the equation is the last term. But I won’t go into that, since that was part of my last post.

Moving forward, note and remember that the gradient of the cost for a single parameter is just a partial derivative:

$$\nabla_\theta J(\theta) = \frac{\partial J}{\partial \theta}$$

If you don’t know what this means, perhaps you should visit the neural networks post, which explains backpropagation, and what gradients and partial derivatives mean, in detail.

Classical Algorithm and Code

For each parameter theta $\theta$, from $1$ to $j$, we update according to this equation:

$$\theta_j = \theta_j - \eta \cdot \nabla_{\theta_j} J(\theta)$$

Usually, this equation is wrapped in a repeat-until-convergence loop, i.e. we update each parameter, for each training example, until we reach a local minimum.

This is a local minimum.

We usually run through the dataset multiple times; each pass is called an epoch, and for each epoch, we should randomly select a subset of the data – this is the stochasticity of the algorithm.

Say we want to translate this to some pseudo code. This is relatively easy, except that we will leave out the function for calculating gradients.
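
A minimal Python sketch of that pseudo code might look as follows; `compute_gradient` is a stand-in for the backpropagation step covered in the earlier post, and the function names are ours:

```python
import random

def sgd(params, data, compute_gradient, lr=0.3, epochs=10):
    """Stochastic gradient descent sketch.

    params:            list of floats (the thetas)
    data:              list of (x, y) training examples
    compute_gradient:  stand-in for backpropagation; returns dJ/dtheta
                       for each parameter, given one example
    """
    for _ in range(epochs):
        random.shuffle(data)                  # the stochastic part
        for x, y in data:
            grads = compute_gradient(params, x, y)
            for j in range(len(params)):
                params[j] -= lr * grads[j]    # theta = theta - eta * grad
    return params
```

For instance, fitting the slope of $y = 2x$ with cost $J = (\theta x - y)^2$, whose gradient is $2(\theta x - y)x$, converges to $\theta \approx 2$.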

Ending the walkthrough of SGD, it is only right to propose some pros and cons of the optimizer. Clearly, it is one of the older algorithms for optimization in neural networks, but nevertheless, it is also comparatively the easiest to learn. The cons are mostly relative to newer and better optimizers, and are perhaps hard to explain at this point. The reason for the cons will become clear once I present the next optimizers.

  • Relatively fast compared to the older gradient descent approaches
  • SGD is comparatively easy to learn for beginners, since it is not as math heavy as the newer approaches to optimizers
  • Converges slower than newer algorithms
  • Has more problems with being stuck in a local minimum than newer approaches
  • Newer approaches outperform SGD in terms of optimizing the cost function


Momentum

Simply put, the momentum algorithm helps us progress faster in the neural network, negatively or positively, as in the ball analogy below. This helps us get to a local minimum faster.

Motivation for momentum

Each time we roll the ball down the hill (each epoch), the ball rolls faster towards the local minimum in the next iteration. This makes us more likely to reach a better local minimum (or perhaps the global minimum) than we could have with SGD.

When optimizing the cost function for a weight, we might imagine a ball rolling down a hill amongst many hills. We hope that we get to some form of optimum.

The slope of the cost function is not actually such a smooth curve, but it’s easier to plot this way to show the concept of the ball rolling down the hill. The function will often be much more complex, hence we might actually get stuck in a local minimum or be significantly slowed down. Obviously, this is not desirable. The terrain is not smooth; it has obstacles and weird shapes in very high-dimensional space – for instance, the concept would look like this in 2D:

Ball stuck on a hilly 2D curve. Tweaked image from Quora user

In the above case, we are stuck at a local minimum, and the motivation is clear – we need a method to handle these situations, perhaps to never get stuck in the first place.

Now that we know why we should use momentum, let’s introduce more specifically what it means, by explaining the mathematics behind it.

Explanation of momentum

Momentum is where we add a temporal element into our equation for updating the parameters of a neural network – that is, an element of time.

This time element increases the momentum of the ball by some amount. This amount is called gamma $\gamma$, which is usually initialized to $0.9$. But we also multiply that by the previous update $v_t$.

What I want you to realize is that our function for momentum is basically the same as SGD, with an extra term:

$$\theta = \theta - \eta \cdot \nabla J(\theta) + \gamma v_t$$

Let’s just make this $100\%$ clear:

  • Theta $\theta$ is a parameter, e.g. your weights, biases or activations
  • Eta $\eta$ is your learning rate, also sometimes written as alpha $\alpha$ or epsilon $\epsilon$.
  • Objective function $J$, i.e. the function which we are trying to optimize. Also called cost function or loss function (although they have different meanings).
  • Gamma $\gamma$, a constant term. Also called the momentum, and rho $\rho$ is also used instead of $\gamma$ sometimes.
  • Last change (last update) to $\theta$ is called $v_t$.

Although it’s very similar to SGD, I have left out some elements for simplicity, because we can easily get confused by the indexing and notational burden that comes with adding temporal elements.

Let’s add those elements now. First the temporal element, then the explanation of $v_t$.

If you want to play with momentum and learning rate, I recommend visiting distill’s page for Why Momentum Really Works.

Adding Time Steps $t$

Adding the notion of time; say we want to update the current parameter $\theta$, how would we go about that? Well, we would first have to define which parameter $\theta$ we want to update at a given time. And how do we do that?

One way to track where we are in time, is to assign a variable of time $t$ to $\theta$. The variable $t$ would work like a counter; we increase $t$ by one for each update of a certain parameter.

How might this look in a mathematical sense? Well, we just subscript every variable that is subject to change over time. That is, the values of our parameter $\theta$ will definitely change over time, but the learning rate $\eta$ remains fixed.

$$\theta_{t+1} = \theta_t - \eta \cdot \nabla J(\theta_t) + \gamma v_{t-1}$$

Theta $\theta$ at the next time step equals $\theta_t$ minus the learning rate times the gradient of the objective function $J$ with respect to the parameter $\theta_t$, plus a momentum term gamma $\gamma$ times the change to $\theta$ at the last time step, $v_{t-1}$.

There it is, we added the temporal element. But we are not done, what does $v_t$ mean? I explained it as the previous update, but what does that entail?

Momentum Term

I told you about the ball rolling faster and faster down the hill, as it accumulates more speed for each epoch, and this term helps us do exactly that.

What helps us accumulate more speed for each epoch is the momentum term, which consists of a constant $\gamma$ and the previous update $\gamma v_{t-1}$ to $\theta$. But the previous update to $\theta$ also includes the second-to-last update to $\theta$, and so on.

Essentially, we store the calculations of the gradients (the updates) for use in all the next updates to a parameter $\theta$. This exact property causes the ball to roll faster down the hill, i.e. we converge faster because now we move forward faster.

Instead of writing $v_{t-1}$, which includes $v_{t-2}$ in its equation and so on, we could use a summation, which might be clearer: we sum from tau $\tau$ equal to $1$ all the way up to the current time step $t$.

The intuition of why momentum works (besides the theory) can effectively be shown with a contour plot – which is a long and narrow valley in this case.

We can think of optimizing a cost function with SGD as oscillating up and down along the y-axis, and the bigger the oscillation up and down the y-axis, the slower we progress along the x-axis. Intuitively, it then makes sense to add something (momentum) to help us oscillate less, thus moving faster along the x-axis towards the local minimum.
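
To make a single update concrete, here is a minimal sketch of one momentum step in Python, assuming the form $\theta = \theta - \eta \nabla J(\theta) + \gamma v$ described above (the function name is ours):

```python
def momentum_step(theta, grad, v_prev, lr=0.1, gamma=0.9):
    """One momentum update:
    theta_new = theta - lr * grad + gamma * v_prev,
    where v_prev is the previous change to theta."""
    theta_new = theta - lr * grad + gamma * v_prev
    return theta_new, theta_new - theta  # new parameter and this step's change

# Minimizing f(theta) = theta^2 (gradient 2*theta), starting at theta = 5:
theta, v = 5.0, 0.0
for _ in range(200):
    theta, v = momentum_step(theta, 2.0 * theta, v)
```

Feeding each step’s change back in as `v_prev` is exactly the ball accumulating speed down the hill.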

The next notation for the notion of change might be more explainable and easier to understand. You may skip the next header, but I think it’s a good alternative way of thinking about momentum. You will learn a different notation, which can enable you to understand other papers using similar notation.

Different Notation: A second explanation

In the paper I linked at the start of the momentum section, it’s described in a similar way but with different notation, so let’s just cover that as well. They define it for a weight $w$ instead of a parameter $\theta$, and they use $E$ for the error function, which is the same as $J$ for the objective or cost function. They also use the Delta symbol $\Delta$ to indicate change:

$$\Delta w_t = -\eta \nabla E(w_t) + \rho \Delta w_{t-1}$$

This is pretty straightforward, so let’s replace the parameters of the equation with the parameters of what I just explained.

  • $w_t$ becomes $\theta_t$
  • $E(w)$ becomes $J(\theta_t)$
  • Rho $\rho$ becomes $\gamma$

Rewriting the parameters, we get almost the same exact equation as presented in the last notation, except we now have a Delta $\Delta$ term at the start and end of the equation. Intuitively, the delta symbol has always meant change when studying physics – and it has the same meaning here, it’s just some rate of change for a parameter over a function $J$.

All the triangle delta means can be specified as a function $\Delta(\theta_t)$, which just specifies how much a parameter $\theta$ should change by. So when I tell you to add $\Delta \theta_{t-1}$ at the end of the equation, it just means taking the last change to $\theta$, i.e. at the last time step $t-1$.

Now $\Delta \theta_t$ becomes our update, and we update our parameter accordingly:

$$\Delta \theta_t = -\eta \nabla J(\theta_t) + \gamma \Delta \theta_{t-1}, \qquad \theta_{t+1} = \theta_t + \Delta \theta_t$$

It’s really that simple.

There is not much to say for pros and cons of the algorithm – perhaps there is not too much theory on the subject of the good and bad of momentum.

  • Faster convergence than traditional SGD
  • As the ball accelerates down the hill, how do we know that we don’t miss the local minimum? If the momentum is too much, we will most likely roll past the local minimum, then roll backwards and miss it again, swinging back and forth around it.

Adam

Adaptive Moment Estimation (Adam) is the next optimizer, and probably also the optimizer that performs the best on average. Taking a big step forward from the SGD algorithm to explain Adam does require explaining some clever techniques adopted from other algorithms, as well as the unique approaches Adam brings.

Adam uses Momentum and Adaptive Learning Rates to converge faster. We have already explored what Momentum means; now we are going to explore what adaptive learning rates mean.

Comparison of many optimizers. Credits to Ridlo Rahman

Adaptive Learning Rate

An adaptive learning rate can be observed in AdaGrad, AdaDelta, RMSprop and Adam, but I will only go into AdaGrad and RMSprop, as they seem to be the most relevant ones (although AdaDelta has the same update rule as RMSprop). The adaptive learning rate property is also known as Learning Rate Schedules, on which I found an insightful Medium post.

So, what is it? I found that the best way is explaining a property from AdaGrad first, and then adding a property from RMSprop. This will be sufficient to show you what adaptive learning rate means and provides.

Part of the intuition for adaptive learning rates is that we start off with big steps and finish with small steps – almost like mini-golf. We are then allowed to move faster initially. As the learning rate decays, we take smaller and smaller steps, allowing us to converge faster, since we don’t overstep the local minimum with steps that are too big.

AdaGrad: Parameters Get Different Learning Rates

Adaptive Gradients (AdaGrad) provides us with a simple approach for changing the learning rate over time. This is important for adapting to differences in datasets, since we can get small or large updates according to how the learning rate is defined.

Let’s go for a top-to-bottom approach; here is the equation:

$$\theta_{t+1,i} = \theta_{t,i} - \frac{\eta}{\sqrt{\sum_{\tau=1}^{t} \left( \nabla J(\theta_{\tau,i}) \right)^2} + \epsilon} \cdot \nabla J(\theta_{t,i})$$

All we added here is division of the learning rate eta $\eta$. Although I told you that $\epsilon$ is sometimes the learning rate, in this algorithm it is not. In fact, it’s just a small value that ensures that we don’t divide by zero.

What needs explaining here is the term $\sqrt{\sum_{\tau=1}^{t} \left( \nabla J(\theta_{\tau,i}) \right)^2}$, i.e. the square root of the summation $\sum$ over all gradients squared. We sum over all the gradients, from time step $\tau=1$ all the way to the current time step $t$.

If $t=3$, then we would sum over the gradients at $t=1$, $t=2$ and $t=3$, and this sum just keeps growing as $t$ becomes larger. Eventually, the effective learning rate might become so small that the updates become stale, i.e. the parameter updates with very small values.

Let me just make an example here, denoting the gradient by $g$ under the square root, i.e. $g(\theta_{3,i})^2 = (\nabla J(\theta_{3,i}))^2$:

$$\sqrt{g(\theta_{1,i})^2 + g(\theta_{2,i})^2 + g(\theta_{3,i})^2}$$

What effect does this have on the learning rate $\eta$? Well, division by bigger and bigger numbers means that the learning rate is decreasing over time – hence the term adaptive learning rate.

We could in simple terms say, that the sum $\sum$ increases over time, as we add more gradients over time:

Overview of how the sum grows, as $t$ gets larger.
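
A single-parameter sketch of the AdaGrad step, assuming the equation above (the helper name is ours):

```python
import math

def adagrad_update(theta, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad step for a single parameter.

    accum holds the running sum of squared gradients; the effective
    learning rate lr / (sqrt(accum) + eps) shrinks as accum grows.
    """
    accum += grad ** 2
    theta -= lr / (math.sqrt(accum) + eps) * grad
    return theta, accum
```

Repeated calls make each step smaller than the last, which is exactly the decaying learning rate described above.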


RMSprop

Root Mean Squared Propagation (RMSprop) is very close to AdaGrad, except that instead of the sum of all past gradients, it uses an exponentially decaying average. This decaying average is realized by combining the Momentum and AdaGrad algorithms with a new term.

An important property of RMSprop is that we are no longer tied to the sum of all past gradients; instead, the average is dominated by the gradients of recent time steps. This means that RMSprop changes the learning rate more slowly than AdaGrad, but still reaps the benefits of converging relatively fast – as has been shown (and we won’t go into those details here).

Doing the top-to-bottom approach again, let’s start out with the equation. By now, you should only be unsure about the expectation of the gradient $E[g^2]$:

$$E[g^2]_t = \gamma E[g^2]_{t-1} + (1-\gamma) g_t^2$$

$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t} + \epsilon} \cdot g_t$$

This exact term is what causes the decaying average (also called a running average or moving average). Let’s examine it in relation to the momentum algorithm presented earlier.

We still have our momentum term $\gamma=0.9$. We can immediately see that the new term $E$ is similar to $v_t$ from Momentum; the differences are that $E$ has no learning rate in its equation, and that it adds a new term $(1-\gamma)$ in front of the gradient $g$. Note that a summation $\sum$ is not used here, since it would involve a more complex equation; I tried to convert it, but got stuck because of the new term, hence I found it not worth it to express it with a summation sign.

With the AdaGrad algorithm, the effective learning rate was monotonically decreasing, while in RMSprop it can adapt up and down in value as we step further down the hill for each epoch. This concludes adaptive learning rate, where we explored two ways of making the learning rate adapt over time. This property of adaptive learning rate is also in the Adam optimizer, and you will probably find that Adam is easy to understand now, given the prior explanations of other algorithms in this post.
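
Here is the same kind of single-parameter sketch for RMSprop, assuming the decaying-average form $E[g^2]_t = \gamma E[g^2]_{t-1} + (1-\gamma) g_t^2$ (the helper name is ours):

```python
import math

def rmsprop_update(theta, grad, avg_sq, lr=0.001, gamma=0.9, eps=1e-8):
    """One RMSprop step for a single parameter.

    avg_sq is the exponentially decaying average E[g^2]; unlike AdaGrad's
    ever-growing sum, it shrinks again when recent gradients are small,
    so the effective learning rate can adapt both down and up.
    """
    avg_sq = gamma * avg_sq + (1 - gamma) * grad ** 2
    theta -= lr / (math.sqrt(avg_sq) + eps) * grad
    return theta, avg_sq
```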

Andrew Ng compares Momentum to RMSprop in a brilliant video on YouTube

Momentum (blue) and RMSprop (green) convergence. We see that RMSprop is faster.

Actually Explaining Adam

Now we have learned all these other algorithms, and for what? Well, to be able to explain Adam such that it’s easier to understand. By now, you should know what Momentum and Adaptive Learning Rate mean.

There are a lot of terms to watch out for in the original paper, and it might seem confusing at first.

Adam algorithm in one picture in pseudo code. Taken from the original Adam paper.

But let’s just paint it in a simplistic way; here is the update rule for Adam:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$$

$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}$$

$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \, \hat{m}_t$$

Immediately, we can see that there are a bunch of numbers and things to keep track of. Most of these have already been explained, but for the sake of clarity, let’s state each term here:

  • Epsilon $\epsilon$, which is just a small term preventing division by zero. This term is usually $10^{-8}$.
  • Learning rate $\eta$ (although it’s $\alpha$ in the paper). They explain that a good default setting is $\eta=0.001$, which is also the default learning rate in Keras.
  • The gradient $g$, which is still the same thing as before: $g = \nabla J(\theta_t)$

We also have two decay terms, also called the exponential decay rates in the paper. The terms are close to $\gamma$ in RMSprop and Momentum, but instead of one term, we have two, called beta 1 and beta 2:

  • First momentum term $\beta_1=0.9$
  • Second momentum term $\beta_2=0.999$

Although these terms are written without the time step $t$, we would just take the value of $t$ and put it in the exponent, i.e. if $t=5$, then $\beta_1^t = 0.9^5 = 0.59049$.
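
Putting the terms together, a single-parameter Adam step can be sketched as follows, following the paper’s pseudo code (the helper name is ours):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a single parameter; t starts at 1."""
    m = b1 * m + (1 - b1) * grad          # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)             # bias correction for the zero init
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

A nice property to notice: at $t=1$ the bias correction makes $\hat{m} = g$ and $\hat{v} = g^2$, so the first step has magnitude approximately $\eta$, regardless of the gradient’s scale.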

A Final Note

The likes of RAdam and Lookahead were considered, along with a combination of the two called Ranger, but were ultimately left out. A number of Medium posts acclaim them as SOTA optimizers, though they remain unproven. A future post could include these “SOTA” optimizers, to explain how they differ from Adam, and why that might be useful.

Anyone getting into deep learning will probably get the best and most consistent results using Adam, as it has been around for a while and has been shown to perform well.

If you want to visualize optimizers, I found this notebook to be a great resource, using optimizers from TensorFlow.

Further Reading

Here is the further reading for this post. Currently, it’s only the papers referenced above.

The Theta Token cryptocurrency – a new stage in video content distribution

Video content is gaining serious momentum and is considered the most convenient and popular format among users. The majority of all information on the internet (around 67%) is consumed as video, and this multi-billion-dollar industry is expanding rapidly.

The existing infrastructure for distributing video content, however, does not fully satisfy users: it leads to slow loading speeds, low-resolution content, and high delivery costs, and ultimately limits the revenue flowing back to video content creators.

As technology develops, video is delivered at ever higher definition and resolution, so the situation may become even more complicated. Theta Token aims to be a revolutionary solution for how video is distributed across the network, using the unique decentralized nature of the blockchain.

What Theta Token is for

The purpose of Theta can be explained in a couple of sentences. The project team is working to improve the online video industry in the most effective way possible. Today, delivering video to different parts of the world is not a cheap undertaking. Theta Network will be organized as an end-to-end decentralized system for streaming video delivery, at minimal cost.

Theta Labs, a subsidiary, is preparing an initial coin offering that will help build a decentralized peer-to-peer network.

The project team

The video streaming platform is backed by a team whose members have more than 30 years of combined experience in the video streaming space.

Theta’s leadership consists of four specialists:

  1. Mitch Liu (CEO) – BS in Computer Science and Engineering from MIT.
  2. Jieyi Long (CTO) – developed several patented technologies, including VR video streaming and instant replays for video games.
  3. Ryan Nichols (Chief Product Officer) – well versed in developing and launching virtual currency systems for various platforms.
  4. Riz Virk (Head of Corporate Development) – an early investor in cryptocurrency and blockchain companies.

There is also a working group:

Особенности проекта

Платформа Theta создана специально для децентрализации потоковых служб. Он работает с открытым исходным кодом, что указывает на то, что проект открыт для всех разработчиков и партнеров.

Theta Token, the native cryptocurrency of Theta Network, is integrated with and supported by some of the most widely used trading platforms on the cryptocurrency market, such as Huobi.

The platform is building a distributed streaming network in which the tokens will serve as a reward and incentive mechanism, encouraging all stakeholders to use the DSN.

It is a unique end-to-end solution for decentralized live video streaming that provides technical incentives for every user participating in the network. The more people join the system, the more people can use Theta Token.

Here are some of the features that make Theta the best platform for streaming video:

  • end-to-end decentralized video delivery;
  • support for Dapps for entertainment, live broadcasts, movies, enterprise, conferences, education, and more;
  • Theta Blockchain – its own protocol and an ERC20-compatible token;
  • an existing platform with 1M MAU.

The launch of the Theta token was highly anticipated thanks to the involvement of YouTube co-founder Steve Chen and Twitch co-founder Justin Kan as advisors. Both platforms, valued at billions of dollars after their acquisitions by Google and Amazon respectively, sit at the delivery end of video for digital content creators, gamers, bloggers, musicians, and so on.

Theta is also backed by investments from many giants of the technology sector, most notably Samsung and Sony.

Theta Network ICO

The Theta Network initial coin offering is capped at 600,000,000 tokens. The initial purchase price is $0.15, or the equivalent in ETH.

The Theta tokens created for the ICO will be distributed in the following proportions:

  • 50% for sale in the ICO;
  • 30% reserved for Theta Labs;
  • 10% allocated to network operating expenses;
  • 10% reserved for partners, advisors, and consultants.
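As a back-of-envelope illustration, the shares above can be turned into token counts. This sketch is purely illustrative: it assumes the percentages apply to the 1-billion total supply of Theta Tokens, and the variable names and rounding are our own.

```python
# Illustrative allocation math; the shares come from this article,
# the assumption that they apply to a 1,000,000,000 total supply is ours.
TOTAL_SUPPLY = 1_000_000_000  # total Theta Tokens (assumed base)
ICO_PRICE_USD = 0.15          # initial purchase price per token

allocations = {
    "ICO sale": 0.50,
    "Theta Labs reserve": 0.30,
    "Network operating expenses": 0.10,
    "Partners, advisors, consultants": 0.10,
}

# The shares must cover exactly 100% of the supply
assert abs(sum(allocations.values()) - 1.0) < 1e-9

for name, share in allocations.items():
    tokens = int(TOTAL_SUPPLY * share)
    print(f"{name}: {tokens:,} tokens "
          f"(~${tokens * ICO_PRICE_USD:,.0f} at the ICO price)")
```

Note that 50% of a 1-billion supply gives 500 million tokens, somewhat below the 600-million ICO cap quoted above, so treat these figures as rough orientation only.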

Problems in the Video Industry

Digital content creators face two main problems:

  • ad revenue distribution;
  • inefficient video delivery methods.

The inefficiency of centralized content delivery networks (CDNs) leads to constant pauses and skips, low-quality video transmission, and limited reach, reducing the potential quality of videos.

Theta is based on the Ethereum blockchain, and all network data is delivered in a decentralized, peer-to-peer fashion. This minimizes pauses and skips and maximizes video delivery.

The structure of the video industry increasingly needs platforms for high-quality video sharing, given the exponential growth of the content distribution market over the past few years.

Ad revenue distribution is not problem-free either. Today many content creators accuse YouTube of a less-than-transparent revenue sharing model and criticize its demonetization algorithm.

How It Works

The first decentralized application to be built on the Theta blockchain is expected to bring millions of viewers with its launch.

The core principle of Theta Network is that any network user with unused bandwidth or computing power can cache and relay streams to other participants in the network. Viewers around the world will be able to contribute "caching nodes" via their PCs, helping to form a global video delivery infrastructure.

To give viewers an incentive to contribute their storage and bandwidth resources to the ecosystem, the Theta protocol is designed as an incentive mechanism. Caching nodes earn the Theta Network's cryptographic token for relaying video streams to other viewers. Theta tokens will not only encourage viewers to join the network as caching nodes but can also significantly improve the efficiency of the streaming market and simplify video delivery.
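The incentive loop described above, where caching nodes earn tokens per stream relayed, can be sketched as simple accounting. The flat tokens-per-gigabyte rate and the function name below are illustrative assumptions, not values from the Theta protocol:

```python
# Minimal sketch of relay-reward accounting, assuming a flat
# tokens-per-gigabyte rate; the real protocol prices relays dynamically.
TOKENS_PER_GB = 0.5  # hypothetical reward rate

def relay_reward(bytes_relayed: int) -> float:
    """Tokens earned by a caching node for relaying this much video."""
    return bytes_relayed / 1e9 * TOKENS_PER_GB

# A node that relays 200 GB of streams overnight:
earned = relay_reward(200 * 10**9)
print(f"earned {earned} tokens")  # prints "earned 100.0 tokens"
```

The point of the sketch is simply that reward is proportional to contributed bandwidth, which is what makes idle capacity worth sharing.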

Within the Theta network, advertisers can target viewers directly at lower cost, while viewers earn Theta tokens for their attention and engagement with video streams.

This streaming architecture is especially effective in geographic regions where internet connections are unstable or unreliable. Individuals in such regions gain the opportunity to generate passive income simply from their unused bandwidth.

Streaming Architecture

The peer-to-peer nature of Theta Network should deliver much higher streaming quality. This will inevitably be reflected in the overall cost of content distribution, ultimately allowing creators to bring original content to market at a lower cost to consumers.

Theta Network is built on three concepts:

  1. Proof of Engagement – a protocol that proves viewers genuinely consumed the video content, providing better transparency for advertisers.
  2. Reputation-Dependent Mining – the block reward for caching nodes is not fixed; its size is determined by the node's reputation score.
  3. Global Reputation Consensus – another consensus mechanism that operates on global reputation score metrics.

When a caching node mines a new Theta block, it computes a reputation score for itself, a measure of the volume of video streams it has relayed.

A higher reputation increases the block reward, which encourages node operators to relay more video.

The Theta blockchain also uses global reputation consensus: when a new block is sealed with a signature, all nodes verify the reputation of the node that produced it and form a global consensus on the size of its reward.
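Taken together, reputation-dependent mining and global reputation consensus could be modeled as below. The base reward, the reputation formula, and the tolerance check are all invented for illustration; the article only says that the reward grows with relay volume and that other nodes verify the producer's claimed reputation:

```python
import statistics

BASE_REWARD = 10.0  # hypothetical base block reward in tokens

def reputation_score(gb_relayed: float) -> float:
    """Toy reputation: grows with relay volume, capped at a 2x multiplier."""
    return min(1.0 + gb_relayed / 1000.0, 2.0)

def block_reward(gb_relayed: float) -> float:
    """Block reward scaled by the miner's reputation score."""
    return BASE_REWARD * reputation_score(gb_relayed)

def consensus_accepts(claimed_gb: float, observed_gb: list[float],
                      tolerance: float = 0.1) -> bool:
    """Other nodes compare the miner's claimed relay volume against the
    median of what they each observed; the claim is accepted only if it
    falls within the tolerance band."""
    return abs(claimed_gb - statistics.median(observed_gb)) <= tolerance * max(claimed_gb, 1.0)

# A node that relayed 500 GB mines a block:
print(block_reward(500.0))                              # 15.0
print(consensus_accepts(500.0, [495.0, 502.0, 498.0]))  # True
print(consensus_accepts(500.0, [100.0, 95.0, 110.0]))   # False
```

Using the median of independent observations is one simple way to make an inflated self-reported reputation fail verification, which is the role the global reputation consensus plays here.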


The main thing a user needs in order to mine the cryptocurrency is a personal computer (PC).

While launching and watching your favorite video content, you will see a small client application, which can also be found in the browser itself. As the end user at that moment, you can cache and relay video streams to neighboring users or any other nodes in the network, and in doing so earn Theta tokens.

To earn more, you can leave your PC running overnight. Keep in mind, however, that for it to mine in your absence, your GPU or CPU and your connection must provide the necessary throughput. Each morning you will see how much the previous night earned you.

Theta by the Numbers

  • price of 1 THETA – $0.106337;
  • market capitalization – $70,608,001 (ranked 106th);
  • average 24-hour trading volume – $4,723,947;
  • circulating supply – 664,002,689 THETA.

Chart of the Theta Token price against Bitcoin and the US dollar:

Where to Buy and Store the Tokens

THETA is traded on 6 cryptocurrency exchanges:

The token can mainly be exchanged for ETH, BTC, and USDT.

Since THETA is an ERC20 token, it can be stored in any Ethereum wallet. The cryptocurrency does not yet have its own dedicated wallet.

Roadmap

The project roadmap comprises 4 development phases. The platform is currently in phase 3, the "Sandbox test environment", which includes the following plans:

  1. Full initial implementation of the PoS blockchain.
  2. Support for multiple validators in different geographic regions.
  3. Scalability testing.
  4. Implementation of an off-chain transaction protocol.

Testnet preparation:

  1. Public interactive demo.
  2. Community users will be able to simulate caching nodes in the Theta protocol.
  3. Blockchain explorer.
  4. Functionality: viewing block history, transaction history, and user addresses.

Testnet deployment:

  1. Testnet launch.
  2. Launch of test clients for early development partners.
  3. Security review: protocol design, cryptographic design, third-party integration.
  4. Mesh streaming.
  5. Development of packet routing algorithms, peer group formation methods, and stream pulling techniques.
  6. Development of traffic tracking and rate limiting mechanisms.
  7. Creation of a dedicated test channel on the platform.
  8. Implementation of a user interaction protocol.

Wallet client launch:

  • A secure multi-platform wallet for transacting the currency associated with the Theta protocol.


Theta Network is one of the largest recent projects based on blockchain technology, backed by some of the biggest investors:

The project certainly has great potential. The idea of a DSN (decentralized streaming network) could well prove revolutionary and completely displace the CDN (content delivery network) model:

  • viewers will be able to earn rewards in Theta tokens;
  • high-quality, smooth video streaming;
  • lower cost of delivering video streams.

If the network becomes successful and in demand, it will attract the biggest content owners, such as Amazon, CNN, Netflix, and others. Perhaps Theta really can radically change the world of video streaming.

Subscribe to our Telegram channel to stay up to date with new articles.

