Whether you are conducting your own analysis or reading results prepared or presented by someone else, you need to understand what story the results are trying to tell. When reading a published article or attending a presentation, the results will typically appear in nicely formatted tables. However! The contents of a formatted table depend on the author's field of study and style preferences: an identical regression with identical data and identical results may be presented in one of several ways. As such, I'll present a raw version of the results directly from statistical software (Stata, in this instance), as it contains examples of all the common presentation methods. This does mean that the table below contains some redundant information, as well as some additional information that most users won't need and that we can ignore for now.
This example uses entirely fake data. Suppose that we are studying whether wealthy individuals tend to receive lighter prison sentences than low-income folks for the same crime. To study this, we might collect income and sentence-length data from 500 randomly selected individuals who all committed the same offense. We might then run a simple regression with sentence length (in days) as the dependent variable and the person's income (in thousands of dollars per year) as the independent variable. The raw results might look something like the table below.

This version was created in Stata, but other statistical software would present the same results in a similar layout. It presents quite a lot of information and can seem overwhelming at first, but we often don't need most of it. Below, I've broken the table into three rectangles. The bottom rectangle is the most important for us, as it contains our estimates and the information we can use to determine whether the results are statistically significant or not. The top-right rectangle contains some information about how well the model performed, only two bits of which we'll need right now. Finally, the rectangle on the top-left contains some further information about model performance, but it is rarely needed. In my many years of working with data, I don't think I've encountered a need for these numbers outside of writing exam questions for Econometrics students. As such, we're safe to ignore the top-left for now.
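If you'd like to produce this kind of output yourself, here is a minimal sketch in Python with statsmodels rather than Stata. The data are simulated and purely hypothetical (the slope of roughly -0.8 days per thousand dollars is chosen only so the simulation loosely resembles this example), so the numbers it prints will not match the table above, but the summary it produces contains the same pieces discussed below.

```python
# A minimal sketch (Python + statsmodels) of running the same kind of regression.
# The data are simulated here purely for illustration and will not reproduce
# the Stata table discussed in the text.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                    # 500 randomly selected individuals
income = rng.uniform(0, 25, size=n)        # income in thousands of dollars per year
sentence = 400 - 0.8 * income + rng.normal(0, 60, size=n)  # sentence length in days (hypothetical)

X = sm.add_constant(income)                # adds the constant term (_cons in Stata)
model = sm.OLS(sentence, X).fit()
print(model.summary())                     # coefficients, std. errors, t, p-values, CIs, R-squared, no. of obs.
```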

Starting with the bottom rectangle, we need to define each of the components:
As for the rectangle on the top-right, there are only two numbers that we need to discuss right now:
NOTE: As with correlation coefficients, there is no such thing as a "good" or "bad" value of R-squared. We do prefer higher values over lower values, but it's all relative. Our example value of 0.026 looks small, but could still be "good" if no other model can do better. Likewise, a value of 0.947 could still be "bad" if it's lower than what we could have obtained from other models.
Now that we've discussed what the parts of the table mean, we can move on to interpreting the results from our example. The part that we typically care most about is the coefficient on income and whether or not it is statistically significant. It is the slope of our estimated regression line and represents the estimated effect that income has on sentence length. The generic way to interpret a regression coefficient is:
On average, a one unit increase in X is associated with a (coefficient on X) unit increase in Y, all else being equal.
It may seem a bit awkward in this generic form, but it will make more sense when we get to our example. Before that, though, we need to pay attention to three key parts of this statement:
Now let's return to interpreting the model in our example:

Coefficient:
P > |t| (the p-value):
95% conf. interval (95% confidence interval)
Note that I didn't spend much time explaining the confidence intervals and skipped over the standard errors and t-statistics entirely. Why? Because for our purposes they're redundant. The t-statistic is calculated as (coefficient - 0) / (standard error). The p-value is calculated using the t-statistic, number of observations, and number of coefficients we're estimating (2 in this example, one for income and one for the constant). The confidence interval is based on the coefficient, standard error, number of observations, and t-critical value. Just having the coefficient and one of the other columns would be sufficient for us to detect whether the effect was statistically significant or not. If one says that the result is significant, the others will too. If one says that a result is not significant, the others will report the same.
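To see this redundancy concretely, here is a short sketch that reproduces those calculations using made-up values for the coefficient and standard error (they are not the numbers from the table above).

```python
# A sketch of how the t-statistic, p-value, and confidence interval all derive
# from the coefficient and its standard error. The values below are hypothetical.
import scipy.stats as st

coef, se = -0.8, 0.3        # hypothetical coefficient and standard error
n, k = 500, 2               # observations and number of estimated coefficients
df = n - k                  # degrees of freedom

t_stat = (coef - 0) / se                       # t = (coefficient - 0) / (standard error)
p_value = 2 * st.t.sf(abs(t_stat), df)         # two-sided p-value from the t distribution
t_crit = st.t.ppf(0.975, df)                   # critical value for a 95% confidence interval
ci = (coef - t_crit * se, coef + t_crit * se)  # 95% confidence interval

print(t_stat, p_value, ci)
# All three deliver the same verdict: p_value < 0.05 exactly when |t_stat| > t_crit,
# which happens exactly when the confidence interval excludes zero.
```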
Our software presents all of these measures for our convenience. Researchers in different fields (and, more specifically, their journals) have different preferences over what information should be reported. And they are just that: preferences. They are all equivalent, and none of the options is definitively better than the others in all situations. When reading results in a paper, here are some ways you might identify which effects are statistically significant at the 5% level without the author directly telling you (which they usually will): the p-value is below 0.05; the 95% confidence interval does not contain zero; the absolute value of the t-statistic exceeds its critical value (roughly 1.96 in large samples); or the coefficient is marked with significance stars (conventions vary, but a single asterisk commonly marks the 5% level).
Again, in a technical sense, it does not matter which of these versions you use when presenting results. What does, unfortunately, matter is conforming with the norms of the field in which you work. To conclude this section on interpretation, I'll remind you that there is a difference between statistical significance and an effect being substantively meaningful (large enough to matter in practice). Our regression produced a statistically significant coefficient, but how does it translate into actual differences between people? For that, it can be helpful to plot our regression line over a scatterplot, like so:

This demonstrates that, for a conviction on the same crime, a person earning $25,000 per year (the maximum in this particular sample) would be expected to receive a sentence an average of 20 days shorter than a person with no income. Is that a big effect or a small one? That's up to you to argue.
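If you want to produce a similar figure yourself, here is a minimal self-contained sketch, again using simulated, purely hypothetical data rather than the data behind the figure above.

```python
# A sketch of plotting a fitted regression line over a scatterplot.
# The data are simulated and hypothetical; they only loosely mimic the example.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
income = rng.uniform(0, 25, size=500)                         # thousands of dollars per year
sentence = 400 - 0.8 * income + rng.normal(0, 60, size=500)   # sentence length in days

slope, intercept = np.polyfit(income, sentence, 1)            # simple OLS fit of the line
line_x = np.linspace(0, 25, 100)

plt.scatter(income, sentence, alpha=0.4, label="Individuals")
plt.plot(line_x, intercept + slope * line_x, color="red", label="Fitted regression line")
plt.xlabel("Income (thousands of dollars per year)")
plt.ylabel("Sentence length (days)")
plt.legend()
plt.show()
```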