SEO Audit #80
https://www.drupal.org/docs/7/modules/metatag/howto-verify-a-site-on-google-webmaster-tools-d7-d8 might be helpful for establishing the Google webmaster verification.
@tallgood suggested that we look into SEO for KEEP and PRISM more, so I'm bumping this up in the milestone list.
I installed SEO Checklist (https://www.drupal.org/project/seo_checklist) locally and checked off as many of the items as I thought applied (some may have an equivalent on our side, like Matomo Analytics instead of Google Analytics), and I was surprised at how low our score was. Some of the bigger outstanding changes on this list include:
Of course, steps like building backlinks to our site are immensely valuable for better SEO, but that depends on promotion and how good our content is. This is especially relevant when searching for terms that match our site and other sites with equal weight (title, description, anchor tags, etc.) -- would there be ways to make our site's content better than the pages it competes with for those terms? Some of this is fairly difficult to fine-tune (that's why SEO experts get paid the big bucks), but there should be some things we have improved upon in recent months that just have not yet been noticed by the algorithms.
Yeah, I can't speak to the differences between Simple XML Sitemap and XML Sitemap. Can you compare? Or look into how Simple XML Sitemap builds the sitemap, to see if it's done on cron? We do have the KEEP sitemap in robots.txt but not PRISM, so we can add that. I'm pretty sure I've submitted the sitemaps to Google, but I'll double-check. I've never done it for other search engines -- would you look into that? Clean URLs: does that mean no node IDs in URLs? We already use collection and item aliases, but I know sometimes they prefer actual words. We could potentially generate a secondary alias with title components in it? I'm speculating here, as I'm not sure of the limitations of URL aliases.
We use Google Analytics instead of Google Tag Manager, but I'm fairly certain that'd be an easy swap if there are benefits to one over the other. I don't know much about Alternate hreflang, Real-time SEO, AMP, or Search 404, so let's look into those. We have Seckit installed, not Security Review, so it's probably worth a comparison there. We have been using AdvAgg as of a few weeks ago. Maybe as we get more of the handles switched over and KEEP and PRISM start being referenced more, it'll help. It's probably worth reviewing our meta tag configurations as well.
As for the comparison between xmlsitemap (https://www.drupal.org/project/xmlsitemap) and simple_sitemap (https://www.drupal.org/project/simple_sitemap), this page touches on every reason: https://gbyte.dev/blog/drupal8-seo-simple_sitemap-vs-xmlsitemap-differences. The reason I'd want to change to xmlsitemap is that it is now supported in D8+ and has some great features; the tradeoff is that it is a little more demanding of resources. Both modules build their sitemaps during cron, but there are some differences: I like how xmlsitemap builds sitemaps based on last-updated, saves the XML as static files (no database lookup needed to serve them to the spiders), and is able to submit sitemaps to search engines. Another big factor is that xmlsitemap is used by 216,000+ sites while simple_sitemap is only used by 82,000+ sites (though, to be fair, many of those sites are pre-D8).
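Either module's output follows the standard sitemaps.org XML format, so it is easy to spot-check what a generated sitemap actually exposes (URLs and last-modified dates). A minimal stdlib sketch, using a made-up example URL:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def parse_sitemap(xml_text):
    """Return a list of (loc, lastmod) tuples from sitemap XML."""
    root = ET.fromstring(xml_text)
    entries = []
    for url in root.iter(SITEMAP_NS + "url"):
        loc = url.findtext(SITEMAP_NS + "loc")
        lastmod = url.findtext(SITEMAP_NS + "lastmod")  # may be absent
        entries.append((loc, lastmod))
    return entries

# Example document (illustrative only, not real site content)
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.org/items/how-cats-instinctively-raise-kittens</loc>
  <lastmod>2021-03-01</lastmod></url>
</urlset>"""

print(parse_sitemap(sample))
```

Running this against both modules' output on a dev site would show whether lastmod values are being populated, which is one of the differences noted above.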
Clean URLs would have no node ID in the URL, and this change may require fixing anything that relied on the URL to derive the node ID value (but EntityManager should have the underlying node ID for any route). The SEO value would come especially in cases where the search term is contained in the URL we publish to search engines. There would have to be a 301 redirect, and I know the system takes care of this: when navigating to "node/23", for example, the redirect may take you to "items/how-cats-instinctively-raise-kittens". If you then search for "how do cats know how to raise kittens", this page would certainly rank above another site that used "node/23" for the same content. The tradeoff is that a table lookup is always needed to serve up the correct node.
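In Drupal this title-to-alias cleanup is what Pathauto does; a rough sketch of the idea (not Drupal's actual code, and Pathauto's real cleaning rules are configurable) looks like:

```python
import re

def slugify(title, max_length=100):
    """Reduce a title to a lowercase, hyphen-separated URL alias,
    roughly in the style of Pathauto's default cleaning (a sketch only)."""
    slug = title.lower()
    # Collapse any run of non-alphanumeric characters into a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", slug).strip("-")
    return slug[:max_length].rstrip("-")

print(slugify("How Cats Instinctively Raise Kittens"))
# -> how-cats-instinctively-raise-kittens
```

The keyword-bearing alias is what gets published to search engines, while the node/{nid} path remains as the internal route behind the 301 redirect.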
…ing for dev, keep, and prod which could individually be enabled for #80
Sounds like it's worth testing out xmlsitemap.
Both Simple XML Sitemap and XML Sitemap have a submodule geared toward submitting to search engines. Both only include Bing and Google (xmlsitemap has settings for a maximum interval and for whether to submit only when the sitemap has changed).
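Under the hood, that submission has historically been a simple HTTP GET against each engine's "ping" endpoint with the sitemap URL as a query parameter (Google has since deprecated its ping endpoint, so treat this as illustrative of how the submodules worked). A sketch that just constructs the request URLs:

```python
from urllib.parse import quote

# Historical ping endpoints (illustrative; availability varies over time)
PING_ENDPOINTS = {
    "google": "https://www.google.com/ping?sitemap={}",
    "bing": "https://www.bing.com/ping?sitemap={}",
}

def ping_urls(sitemap_url):
    """Return the submission URL that would be requested for each engine."""
    encoded = quote(sitemap_url, safe="")
    return {engine: tpl.format(encoded) for engine, tpl in PING_ENDPOINTS.items()}

print(ping_urls("https://example.org/sitemap.xml"))
```

The actual modules wrap this in cron logic (interval, changed-only) rather than pinging on every run.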
… and would need a controller redirect to include 'search_api_fulltext=' for #80
My current VM is having search view problems and never seems to return any results, but a clone of this page (as /search2) seems to work, so I will tinker with a duplicate page to see if it handles the route correctly when providing search404 the tags (search terms) via this "Fulltext search" as a contextual filter rather than as an exposed standard view filter field (which injects ?search_api_fulltext={SEARCH_TERMS}).
Due to some limitations in how this search404 module works and how a view's search filters / contextual filters need to be set up (the exposed filter box would only expose fields that are actual filters, not contextual ones), it seems like the benefit is not that great considering what it would take to get those working. It would be possible, and could have some benefit to end-users, to provide a link on the 404 page that performs a search for the part of the URL that was not found. For example, a user navigates to "localhost:8000/this is not a real page", which 404s, and that page could render a link to perform that as a search, like {Search "this is not a real page" in the repository}.
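The 404-to-search-link idea is mostly string handling: take the path that failed and feed it to the exposed search_api_fulltext filter mentioned above. A minimal sketch, where the /search base path is an assumption for illustration:

```python
from urllib.parse import urlencode

def search_link_for_404(not_found_path, base="/search"):
    """Turn the path that 404ed into a repository-search link.
    'search_api_fulltext' matches the exposed filter key discussed above;
    the '/search' base path is hypothetical."""
    # Treat path separators already stripped; hyphens/underscores become spaces
    terms = not_found_path.strip("/").replace("-", " ").replace("_", " ")
    return base + "?" + urlencode({"search_api_fulltext": terms})

print(search_link_for_404("/this is not a real page"))
```

This avoids the contextual-filter plumbing entirely: the 404 page only needs to render an ordinary link into the existing exposed-filter search view.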
The "Alternate hreflang" module (https://www.drupal.org/project/hreflang) does not automatically append hreflang="en" to ALL links, but it does for all links made with core Drupal functions that use the Link class. Links built in any custom text fields in a view are not addressed, so localizing those based on the node's language property would require adding the Language field through the view(s) and performing the logic to display the appropriate two-letter codes.
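For the custom text-field links, the per-link work is small: render the anchor with an hreflang attribute derived from the node's langcode. A sketch of the markup we would need to produce (not Drupal's renderer; in practice this would live in a twig template):

```python
def link_with_hreflang(url, text, langcode):
    """Render an anchor carrying an hreflang attribute, mirroring what the
    hreflang module adds for Link-class links (a sketch, not Drupal code)."""
    # Use only the two-letter primary language subtag, e.g. "en" from "en-US"
    return '<a href="{}" hreflang="{}">{}</a>'.format(url, langcode[:2], text)

print(link_with_hreflang("/items/42", "Item 42", "en"))
```

The harder part, as noted above, is plumbing the Language field into each view so the langcode is available where the link is built.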
@elizoller, I am going to want to check the Google Analytics for the repository sites so that we can get some metrics before and after making some of these changes.
…ealtime_seo to show on collection edit page -- unknown how score is derived for #80
(the status of these tasks refers to the code in the
… access in twigs, added langcode to collection browse teaser, adjusted search item is part of to use langcode for #80
Regarding the friendly URLs, I believe we will want the canonical URL for the content to be the friendly URL rather than items/{nid}. This should be easy to achieve, but I worry about indexing penalties. Additionally, the Simple XML Sitemap module would then push the new pattern for content, based on tokens from the item/collection title. I suggest updating the configuration for these (admin/config/search/path) as follows:
And also adding patterns for the various taxonomy vocabularies:
Though, after generating the friendly URLs, there are some places where the link to content uses the previous pattern, and those must be updated. Breadcrumbs are one place, but I also suspect that some twigs build the link with /items/[node.id] or /collections/[node.id]. These should be updated so that links contain possible keywords. Everywhere we create links, and in all of our custom block code, we will need to confirm that the links still work.
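To make the pattern idea concrete: Pathauto expands [node:title]-style tokens into aliases. The patterns below are hypothetical (the real ones are whatever is configured at admin/config/search/path), and this is only a rough sketch of the token expansion, not Pathauto's implementation:

```python
import re

# Hypothetical pathauto-style patterns, for illustration only
PATTERNS = {
    "item": "items/[node:title]",
    "collection": "collections/[node:title]",
}

def apply_pattern(bundle, tokens):
    """Expand [node:NAME] tokens in a bundle's pattern into a cleaned alias."""
    path = PATTERNS[bundle]
    for name, value in tokens.items():
        # Clean each token value the way an alias cleaner would
        slug = re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")
        path = path.replace("[node:%s]" % name, slug)
    return path

print(apply_pattern("item", {"title": "How Cats Instinctively Raise Kittens"}))
```

Any twig or custom block that hard-codes /items/[node.id] would bypass these patterns, which is why each link-building site needs to be audited after the aliases are generated.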
related to: #17