
First, note that the smallest L2-norm vector that can fit the training data for the core model is \(\hat{\theta}^{\text{-s}} = [2, 0, 0]\).

On the other hand, in the presence of the spurious feature, the full model can fit the training data perfectly with a smaller norm by assigning weight \(1\) to the feature \(s\) (\(\|\hat{\theta}^{\text{-s}}\|_2^2 = 4\), while \(\|\hat{\theta}^{\text{+s}}\|_2^2 + w^2 = 2 < 4\)).

Generally, in the overparameterized regime, since the number of training examples is smaller than the number of features, there are some directions of data variation that are not observed in the training data. In this example, we do not observe any information about the second and third features. However, the non-zero weight on the spurious feature leads to a different assumption about the unseen directions. In particular, the full model does not assign weight \(0\) to the unseen directions. Indeed, by substituting \(s\) with \({\beta^\star}^\top z\), we can view the full model as not using \(s\) but implicitly assigning weight \(\beta^\star_2 = 2\) to the second feature and \(\beta^\star_3 = -2\) to the third feature (the directions unseen at training time).
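To make this concrete, here is a minimal numpy sketch of the example. The exact training set is an assumption chosen to be consistent with the numbers quoted above: a single noiseless example \(z = [1, 0, 0]\) with label \(y = 2\), and \(\beta^\star = [1, 2, -2]\); the minimum-norm interpolators are computed with the pseudoinverse.

```python
import numpy as np

# Hypothetical training data consistent with the numbers in the text:
# one noiseless example z = [1, 0, 0] with label y = 2, and a spurious
# feature s = beta_star^T z with beta_star = [1, 2, -2] (assumed values).
Z = np.array([[1.0, 0.0, 0.0]])
y = np.array([2.0])
beta_star = np.array([1.0, 2.0, -2.0])
s = Z @ beta_star                       # spurious feature on the training data

# Core model: minimum-L2-norm interpolator over z alone.
theta_core = np.linalg.pinv(Z) @ y      # [2, 0, 0], squared norm 4

# Full model: minimum-L2-norm interpolator over [z, s].
X_full = np.hstack([Z, s[:, None]])
coef = np.linalg.pinv(X_full) @ y       # [1, 0, 0, 1]
theta_full, w = coef[:3], coef[3]

print(theta_core, theta_core @ theta_core)            # [2. 0. 0.] 4.0
print(theta_full, w, theta_full @ theta_full + w**2)  # [1. 0. 0.] 1.0 2.0

# Substituting s = beta_star^T z exposes the full model's implicit
# weights on the raw features, including the unseen directions.
print(theta_full + w * beta_star)       # [ 2.  2. -2.]
```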

In this example, removing \(s\) reduces the error for a test distribution with high deviations from zero in the second feature, while removing \(s\) increases the error for a test distribution with high deviations from zero in the third feature.
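Continuing the sketch above, we can check this numerically. The true parameter \(\theta^\star = [2, 0, -2]\) used here is a hypothetical choice consistent with the behavior just described: it disagrees with the full model's implicit weight on the second feature and agrees with it on the third.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed ground truth (hypothetical values consistent with the text).
theta_star = np.array([2.0, 0.0, -2.0])  # true target parameter
beta_star  = np.array([1.0, 2.0, -2.0])  # true spurious-feature parameter

# Minimum-norm fits from the sketch above.
theta_core = np.array([2.0, 0.0, 0.0])           # core model (s removed)
theta_full, w = np.array([1.0, 0.0, 0.0]), 1.0   # full model (with s)
theta_full_eff = theta_full + w * beta_star      # implicit weights [2, 2, -2]

def test_mse(theta_eff, var_diag):
    """MSE of a linear predictor under a zero-mean test distribution
    with independent features of the given variances."""
    Z_test = rng.normal(size=(100_000, 3)) * np.sqrt(var_diag)
    y_test = Z_test @ theta_star
    return np.mean((Z_test @ theta_eff - y_test) ** 2)

for name, var in [("high variance in 2nd feature", [1.0, 4.0, 0.0]),
                  ("high variance in 3rd feature", [1.0, 0.0, 4.0])]:
    print(f"{name}: core {test_mse(theta_core, var):.2f}, "
          f"full {test_mse(theta_full_eff, var):.2f}")
# high variance in 2nd feature: core 0.00, full ~16 -> removing s helps
# high variance in 3rd feature: core ~16, full 0.00 -> removing s hurts
```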

The drop in accuracy at test time depends on the relationship between the true target parameter (\(\theta^\star\)) and the true spurious-feature parameter (\(\beta^\star\)) in the seen and unseen directions.

As we saw in the previous example, by using the spurious feature, the full model incorporates \(\beta^\star\) into its estimate. The true target parameter (\(\theta^\star\)) and the true spurious-feature parameter (\(\beta^\star\)) agree on some of the unseen directions and disagree on the others. Thus, depending on which unseen directions are weighted heavily at test time, removing \(s\) can increase or decrease the error.

More formally, the weight assigned to the spurious feature is proportional to the projection of \(\theta^\star\) on \(\beta^\star\) in the seen directions. If this number is close to the projection of \(\theta^\star\) on \(\beta^\star\) in the unseen directions (in comparison to 0), removing \(s\) increases the error; otherwise, it decreases the error. Note that since we assume noiseless linear regression and choose models that fit the training data, the model predicts perfectly in the seen directions, and only variation in the unseen directions contributes to the error.

(Left) The projection of \(\theta^\star\) on \(\beta^\star\) is positive in the seen directions but negative in the unseen directions; therefore, removing \(s\) decreases the error. (Right) The projection of \(\theta^\star\) on \(\beta^\star\) is similar in both the seen and unseen directions; therefore, removing \(s\) increases the error.

Let's now formalize the conditions under which removing the spurious feature (\(s\)) increases the error. Let \(\Pi = Z^\top(ZZ^\top)^{-1}Z\) denote the projection onto the span of the training data (the seen directions), so that \(I - \Pi\) projects onto the null space of the training data (the unseen directions). The equation below determines when removing the spurious feature decreases the error.
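A schematic version of this condition, writing \(\Sigma\) for the test-time covariance and \(\langle u, v \rangle_\Sigma = u^\top \Sigma v\) for the covariance-weighted inner product (the precise form may differ from the original), is:

\[
\underbrace{\bigl|\, \langle \Pi\theta^\star, \Pi\beta^\star \rangle_\Sigma - \langle (I-\Pi)\theta^\star, (I-\Pi)\beta^\star \rangle_\Sigma \,\bigr|}_{\text{left side}}
\;>\;
\underbrace{\bigl|\, 0 - \langle (I-\Pi)\theta^\star, (I-\Pi)\beta^\star \rangle_\Sigma \,\bigr|}_{\text{right side}}
\]

When this inequality holds, removing \(s\) decreases the error.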

The core model assigns weight \(0\) to the unseen directions (weight \(0\) to the second and third features in this example).

The left side is the difference between the projection of \(\theta^\star\) on \(\beta^\star\) in the seen directions and their projection in the unseen directions, scaled by the test-time covariance. The right side is the difference between 0 (i.e., not using the spurious feature) and the projection of \(\theta^\star\) on \(\beta^\star\) in the unseen directions, scaled by the test-time covariance. Removing \(s\) helps when the left side is greater than the right side.

Since the theory applies only to linear models, we now demonstrate that in non-linear models trained on real-world datasets, removing a spurious feature reduces accuracy and affects groups disproportionately.
