en:iot-reloaded:regression_models [2024/12/02 17:13] – [Piecewise linear models] ktokarz
en:iot-reloaded:regression_models [2024/12/10 21:33] (current) pczekalski
  
<figure Galton's_data_set>
{{ :en:iot-reloaded:galton.png?600 | Galton's Data Set}}
<caption>Galton's Data Set</caption>
</figure>
  
  
<figure Linear model 1>
{{ :en:iot-reloaded:lineareq1.png?200 | Linear Model}}
<caption>Linear Model</caption>
</figure>
  
  * β0 and β1 – the y-axis crossing (intercept) and slope coefficients of the linear function, respectively
  
Unfortunately, in the context of the given example, finding such a function is not possible for all x-y pairs at once, since the x and y values differ from pair to pair. However, it is possible to find a linear function that, over all x-y pairs, minimises the distance between the given y and the y' produced by the function or model. Here, y' is the estimated or forecasted y value, and the distance between each y-y' pair is called an error. Since the error might be positive or negative, the squared error is used to measure it.
It means that the following equation might describe the model:
  
<figure Linear model 2>
{{ :en:iot-reloaded:lineareq2.png?200 | Linear Model with Estimated Coefficients}}
<caption>Linear Model with Estimated Coefficients</caption>
</figure>
  
  
<figure Model error>
{{ :en:iot-reloaded:lineareq3.png?400 | Model Error}}
<caption>Model Error</caption>
</figure>
  
  
<figure Coefficient values>
{{ :en:iot-reloaded:lineareq4.png?200 | Coefficient Values}}
<caption>Coefficient Values</caption>
</figure>
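As a sketch of how the coefficient values are obtained in practice, the closed-form ordinary least squares solution for a single-variable model can be computed directly from the deviations of x and y around their means. The data below is an illustrative toy set, not Galton's data:

```python
# Illustrative ordinary least squares fit for y = b0 + b1 * x.
# Closed-form solution: b1 = cov(x, y) / var(x), b0 = mean(y) - b1 * mean(x).

def fit_simple_ols(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Sum of squared deviations of x and of cross-deviations of x and y
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx        # slope
    b0 = my - b1 * mx     # intercept (y-axis crossing)
    return b0, b1

# Toy data lying exactly on y = 1 + 2x
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 5.0, 7.0, 9.0]
b0, b1 = fit_simple_ols(x, y)
print(b0, b1)  # prints 1.0 2.0
```

For noise-free data the estimated coefficients recover the generating line exactly; with noisy data they minimise the sum of squared errors instead.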
  
  
<figure Galton's_data_set_with_model>
{{ :en:iot-reloaded:galtonmodel.png?600 | Galton's Data Set with Linear Model}}
<caption>Galton's Data Set with Linear Model</caption>
</figure>
  
  
<figure Coefficient values 2>
{{ :en:iot-reloaded:lineareq5.png?400 | Coefficient Values}}
<caption>Coefficient Values</caption>
</figure>
  
  * ei – the error of the model's ith output
  
Since the error for a given yi might be positive or negative and the model itself minimises the overall error, one might expect the errors to be normally distributed around the model, with a mean value of 0 and a sum close to or equal to 0. Examples of the error for a few randomly selected data points are depicted in red in figure {{ref>Galton's_data_set_errors}}:
  
<figure Galton's_data_set_errors>
{{ :en:iot-reloaded:galtonmodelerrors.png?600 | Galton's Data Set with the Linear Model and its Errors}}
<caption>Galton's Data Set with the Linear Model and its Errors</caption>
</figure>
  
  
<figure Error_distribution_example>
{{ :en:iot-reloaded:errors_1.png?600 | Error Distribution Example}}
<caption>Error Distribution Example</caption>
</figure>
  
  
<figure Error_distribution_example2>
{{ :en:iot-reloaded:errors_2.png?600 | Error Distribution Example}}
<caption>Error Distribution Example</caption>
</figure>
  
From this discussion, a few essential notes have to be taken:
  * Error distributions (around 0) should be treated as carefully as the models themselves;
  * In most cases, structure in the error distribution is hard to see even when the errors are illustrated;
  * It is essential to look into the distribution to ensure that there are no regularities.
If any regularities are noticed, whether a simple variance increase or a cyclic nature, they point to something the model does not consider. This might indicate a lack of data, i.e., other factors that influence the modelled process but are not part of the model, and are therefore exposed through the nature of the error distribution. It might also point to an oversimplified view of the problem, in which case more complex models should be considered. In any of these cases, a deeper analysis is warranted.
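One simple numeric check for such regularities, sketched below on synthetic residuals (not data from the text), is the lag-1 autocorrelation of the error sequence: values far from 0 hint at structure, such as cyclic behaviour, that the model does not capture.

```python
import math

def lag1_autocorr(e):
    # Correlation between consecutive residuals e[i] and e[i+1];
    # near 0 for independent errors, far from 0 when structure remains.
    n = len(e)
    m = sum(e) / n
    num = sum((e[i] - m) * (e[i + 1] - m) for i in range(n - 1))
    den = sum((ei - m) ** 2 for ei in e)
    return num / den

# A cyclic residual pattern: strong positive lag-1 autocorrelation
cyclic = [math.sin(i / 3.0) for i in range(60)]
print(lag1_autocorr(cyclic))  # close to 1, i.e. a regularity is present
```

A sequence of genuinely independent errors would instead give a value near 0, so this statistic is one cheap first test before a deeper analysis.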
  
<figure Linear model>
{{ :en:iot-reloaded:lineareq6.png?400 | General Notation of a Linear Model}}
<caption>General Notation of a Linear Model</caption>
</figure>
  
Here, the error is considered to be normally distributed around 0, with standard deviation sigma and variance sigma squared. The variance provides at least a numerical insight into the error distribution; therefore, it should be considered an indicator for further analysis. Unfortunately, the true value of sigma is not known; thus, its estimated value should be used:
  
<figure Sigma>
{{ :en:iot-reloaded:lineareq7.png?300 | Sigma Estimate}}
<caption>Sigma Estimate</caption>
</figure>
  
  
<figure Variance>
{{ :en:iot-reloaded:lineareq8.png?200 | Variance Estimate}}
<caption>Variance Estimate</caption>
</figure>
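The residuals and the sigma estimate can be sketched numerically as follows. The toy data and fitted coefficients below are illustrative, and the common convention of n − 2 degrees of freedom for a two-parameter model is assumed; the exact denominator in the text's formula may differ:

```python
import math

def residuals(x, y, b0, b1):
    # e_i = y_i - y'_i, where y'_i = b0 + b1 * x_i is the model output
    return [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

def sigma_hat(e, n_params=2):
    # Residual standard error: sqrt(RSS / (n - number of model parameters))
    rss = sum(ei ** 2 for ei in e)
    return math.sqrt(rss / (len(e) - n_params))

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.2, 6.8, 9.0]
b0, b1 = 1.0, 2.0                 # coefficients from a previously fitted model
e = residuals(x, y, b0, b1)
print(sum(e))      # close to 0 for a well-fitted model
print(sigma_hat(e))
```

The near-zero residual sum mirrors the expectation stated above, while sigma_hat gives the numerical spread of the errors around the model.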
  
===== Multiple linear regression =====
  
In many practical problems, the target variable Y might depend on more than one independent variable X, for instance, wine quality, which depends on its level of serenity, amount of sugars, acidity, and other factors. Applying a linear regression model in this case may look more complicated, but it is still a linear model of the following form:
  
<figure Multiple linear model>
{{ :en:iot-reloaded:lineareq9.png?600 | Multiple Linear Model}}
<caption>Multiple Linear Model</caption>
</figure>
  
  
<figure Multiple linear model error estimate>
{{ :en:iot-reloaded:lineareq10.png?400 | Multiple Linear Model Error Estimate}}
<caption>Multiple Linear Model Error Estimate</caption>
</figure>
  
-Unfortunately, the results of multiple linear regression cannot be visualised in the same way as for a single linear regression due to the number of factors (dimensions). Therefore, numerical analysis and interpretation of the model should be done. In many situations, numerical analysis is complicated and requires a semantic interpretation of the data and model. To do it, visualisations reflecting the relation between the dependent variable and independent variables result in multiple graphs. Otherwise, the quality of the model is hardly assessable or even unassessable. +Unfortunately, due to the number of factors (dimensions), the results of multiple linear regression cannot be visualised in the same way as those of a single linear regression. Therefore, numerical analysis and interpretation of the model should be done. In many situations, numerical analysis is complicated and requires a semantic interpretation of the data and model. To do it, visualisations reflecting the relation between the dependent variable and independent variables result in multiple graphs. Otherwise, the quality of the model is hardly assessable or even unassessable. 
  
===== Piecewise linear models =====
  
<figure Piecewise linear model>
{{ :en:iot-reloaded:lineareq11.png?400 | Piecewise Linear Model}}
<caption>Piecewise Linear Model</caption>
</figure>
  
  
<figure Complex_data_example>
{{ :en:iot-reloaded:complexdata.png?600 | Complex Data Example}}
<caption>Complex Data Example</caption>
</figure>
  
  
<figure Piecewise_linear_model_two>
{{ :en:iot-reloaded:complexdata_2pieces.png?600 | Piecewise Linear Model with 2 Splits}}
<caption>Piecewise Linear Model with 2 Splits</caption>
</figure>
  
  
<figure Piecewise_linear_model_many>
{{ :en:iot-reloaded:complexdata_npieces.png?600 | Piecewise Linear Model with Many Splits}}
<caption>Piecewise Linear Model with Many Splits</caption>
</figure>
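The piecewise idea can be sketched as follows: the x-axis is split at a breakpoint (assumed known here; in practice it is chosen by search or domain knowledge), and an independent linear model is fitted on each segment. The data below is an illustrative toy set with a single split:

```python
# Piecewise linear fit: one independent OLS line per segment of the x axis.

def fit_segment(pts):
    # Ordinary least squares on one segment; returns (intercept, slope)
    n = len(pts)
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    sxx = sum((px - mx) ** 2 for px, _ in pts)
    sxy = sum((px - mx) * (py - my) for px, py in pts)
    b1 = sxy / sxx
    return my - b1 * mx, b1

def fit_piecewise(x, y, breakpoint):
    # Split the data at the breakpoint and fit each piece separately
    left = [(xi, yi) for xi, yi in zip(x, y) if xi <= breakpoint]
    right = [(xi, yi) for xi, yi in zip(x, y) if xi > breakpoint]
    return fit_segment(left), fit_segment(right)

# Toy data: slope 1 up to x = 3, slope -2 afterwards
x = [0, 1, 2, 3, 4, 5, 6]
y = [0, 1, 2, 3, 1, -1, -3]
(l0, l1), (r0, r1) = fit_piecewise(x, y, 3)
print(l1, r1)  # prints 1.0 -2.0
```

Each added split increases the model's flexibility but also its number of parameters, which is why too many splits lead to the overfitting risk illustrated in the figures above.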
  
en/iot-reloaded/regression_models.1733159629.txt.gz · Last modified: 2024/12/02 17:13 by ktokarz
CC Attribution-Share Alike 4.0 International