Grouper are saltwater fish commonly targeted in the southern regions of the United States and parts of South America. In terms of size, grouper can commonly grow to well over 3 feet in length and weigh upwards of 200 lbs.
In autumn, grouper tend to hold in deeper water until the weather begins to cool late in the season. Once it does, they move into waters ranging from 50 to 100 feet deep.
The colder winter months are a good time to catch them because they are closer to shore; feeding activity is also high in spring, which makes that another good opportunity. When grouper are closer to shore, spinning rods are a good choice.
Stick with a heavy, fast-action rod around 6 to 7 feet in length. When you’re fishing deeper offshore waters, a conventional reel is the better choice.
These kinds of lures are versatile and can be fished in a wide variety of settings, though you can also fish chunks of dead bait productively when targeting grouper.
Sardines are considered most effective by many anglers, but you can also use squid, pinfish, mullet, and other small fish. Grouper hold tight to cover, which is why you need to fish near coral ledges, rock piles, and other structures where they are likely to be hiding.
This is likely to draw any grouper hiding in holes out into the open and get them readily biting your baits. Figure: female (upper) and male (lower) gag grouper.
Range Description: This species has a disjunct distribution in the western Atlantic: from North Carolina south along the U.S. coast, in Bermuda, throughout the Gulf of Mexico (except Cuba), and in southern Brazil from the state of Rio de Janeiro to Santa Catarina. Species Summary: The gag grouper is a reef-associated species usually found offshore over rocky bottoms and occasionally inshore over rocky or grassy bottoms.
Overall, the species prefers habitats characterized by maximum structural complexity. Juveniles primarily inhabit seagrass beds, but also oyster reefs and structures in shallow estuaries.
The species primarily consumes fish, along with some crabs, shrimps, and cephalopods. Females reach maturity at 3-6 years of age, at fork lengths around 71 cm; sex change occurs at 75-111 cm TL and about 8 years of age.
The gag spawns in aggregations and, according to tagging studies, can migrate hundreds of kilometers to reach spawning sites, which are located on shelf-edge reefs and rocky ridges next to drop-offs. Males remain near the spawning sites in deep water year-round; in December and January, females form pre-spawning aggregations in shallower areas before migrating to the spawning aggregation sites.
Aggregations form in February through mid-April in the southeastern U.S. and from January to March on the Campeche Bank off Mexico. Fisheries : The species is heavily exploited by recreational and commercial fisheries throughout its range, including direct targeting of spawning aggregations.
Juveniles are often taken as bycatch in the bait-shrimp fishery that operates in seagrass beds. Management/Conservation: As a result of overfishing, spawning aggregations have been greatly reduced or no longer form in some areas, and adult sex ratios have become heavily female-biased in most areas.
Nassau grouper (Epinephelus striatus) migrate to specific sites during the winter full moons in order to reproduce in mass aggregations. Intense harvesting of spawning aggregations is the primary cause of the precipitous decline in populations throughout the Caribbean.
Ultimately, this information will allow us to assess the current and future impacts of the protections afforded to Cayman’s spawning aggregations. In a nutshell, we learned that 1) all Nassau grouper attending the spawning aggregation on the West End of Little Cayman are from Little Cayman (none are traveling from other countries, or even the other two Cayman Islands), 2) all reproductively mature Nassau grouper on Little Cayman attend the aggregation each year (often in multiple months each year), 3) larger (older) Nassau grouper arrive earlier and stay longer at the aggregation site, and 4) the fish move back and forth off the site during the aggregation period and will often circumnavigate the island during the day.
Gag are a relatively common species of grouper in waters offshore of Louisiana, and are avidly pursued by both recreational and commercial fishermen.
Gag are protogynous hermaphrodites, maturing as females and later changing sex to male, which means there are fewer males than females in any population; fishing pressure can therefore affect one sex more than the other. Gag have been well researched on the South Atlantic coast, but very little work has been done on their biology in the Gulf of Mexico.
A total of 1,331 gags ranging in size from 0.7 – 48.9 inches long were captured. Larger fish were captured by recreational and commercial fishermen from waters 119 to 594 feet deep.
Smaller fish nearshore and in Tampa Bay were captured using seines, push nets, hooks, traps, and spearguns. All fish were weighed, measured, and their age determined by counting the rings in their otoliths (ear bones).
Gag in the study increased in size as the water became deeper, out to 265 feet deep. Growth rates were most rapid the first year, with the average gag being almost 17 inches long by its first birthday.
My path to Grouper began in early 2019, as I was finishing a PhD in Classical Greek and Latin at Catholic University of America (how’s that for an origin story!). Rather than going through the hell of the tenure track job search, I decided I’d rather stay in the DC area, and that the best way to do that was to return to a technical career (my undergrad degree is in Comp Sci and I’d worked at IBM while in university).
In July 2019, I started working on the Identity & Access Management (IAM) team in the Division of IT at the University of Maryland, College Park. During my tenure here I’ve been the primary engineer responsible for our Grouper deployment; or, as I sometimes call myself in meetings on Friday afternoons, I’ve been UMD’s “Lord of Grouper.” This long retrospective is occasioned by a change: in July 2020, I begin a new role as a Full Stack Engineer with Arc Publishing (i.e., The Washington Post).
It’s being used successfully across North American and European higher ed as part of a wider suite of open-source IAM tools developed by the Internet2 community. The latter team had been responsible for Grouper (and quite a number of other things); my arrival finally made it possible to hand it over to the IAM team, where it belonged.
Every time we changed Grouper code, we had to make sure the whole thing still worked as expected. Each component (UI, WS, and Daemon) had a separate Bamboo plan for deployment.
All the more so because Grouper has an overlay system that sometimes requires an exact path to the overlaid file. This meant you couldn’t, in practice, just copy one file, say grouper-loader.properties, from one environment to the next without tweaking the overlay settings.
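As an illustration of the overlay pattern (the file names are real Grouper config files, but the key and values here are just an example, not our actual settings), the base file ships defaults and the per-environment overlay carries only the keys it changes, at the exact path the configuration expects:

```properties
# grouper-loader.base.properties -- shipped defaults (example key)
changeLog.changeLogTempToChangeLog.quartz.cron = 0 * * * * ?

# grouper-loader.properties -- per-environment overlay; only overridden
# keys live here, and the file must sit where the overlay config points
changeLog.changeLogTempToChangeLog.quartz.cron = 0 0/5 * * * ?
```

Because the overlay resolution is path-sensitive, copying one of these files between environments without also carrying its expected location tends to silently fall back to the defaults.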
Grouper keeps full audit information for group memberships over time, but in this case those tables were creating DB inconsistency-related exceptions in production at least once every few weeks. Our IAM team does not have access to production database credentials, so we had no way to run the Grouper shell to do “maintenance” type things that couldn’t be done through the UI or WS.
This meant that it was easier, at first at least, to treat Grouper like one of our homegrown Tomcat apps instead of something built elsewhere with its own needs. The production issues probably had something to do with running the Daemon without enough memory early on: occasionally it would die in the middle of something important, and this caused problems that persisted for months and required weeks’ worth of sleuthing to figure out.
The cloud infrastructure was ready for us; we mainly needed to figure out how to get Grouper to deploy well into that environment. I frankly wanted us to get out of the business of “building Grouper”, so I reevaluated our Java customizations to see what needed to remain.
Our customizations included:

- turning off the built-in Shibboleth service provider in favor of CAS
- turning off SSL in the container in favor of termination at the load balancer
- moving Apache to listen on 8080 and Tomcat to 8081 (since by default our app hosting setup expected to find containers listening on 8080)
- adding some Java artifacts (custom connectors, libraries for fetching credentials, health check code to make Grouper look more like one of our homegrown apps)
- adding our health check servlets into the web.xml and telling the CSRF guard to ignore them

On the packaging and deployment front, we settled initially on a two-tiered setup (this ultimately proved more cumbersome than useful).
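For the health check piece, the shape of the change looked roughly like this (the servlet class and URL pattern here are hypothetical placeholders, not our exact code):

```xml
<!-- web.xml overlay: register a hypothetical health-check servlet -->
<servlet>
  <servlet-name>healthCheck</servlet-name>
  <servlet-class>edu.example.iam.HealthCheckServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>healthCheck</servlet-name>
  <url-pattern>/status/healthcheck</url-pattern>
</servlet-mapping>
```

In the OWASP CSRFGuard properties overlay, a line along the lines of `org.owasp.csrfguard.unprotected.HealthCheck=/status/healthcheck` then tells the guard to leave that path alone, so the load balancer can poll it without a CSRF token.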
Generally this means there’s a versioned war built by Maven that’s baked into a container image at build time. We would build one image and then deploy it to three different ECS services (product stacks, in UMD terminology).
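A minimal sketch of that idea, with a generic Tomcat base image and hypothetical artifact names and paths (not our actual Dockerfile):

```dockerfile
# Hypothetical sketch: bake the versioned, Maven-built war into a
# Tomcat image; one image is then deployed to all three ECS services.
FROM tomcat:8.5
COPY target/grouper-2.4.war /usr/local/tomcat/webapps/grouper.war
# Layer in our configuration overlays alongside the webapp
COPY conf/ /usr/local/tomcat/grouper-conf/
```

Building a single immutable image and reusing it across environments is what makes the “deploy to three ECS services” step a pure configuration change rather than a rebuild.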
Our Software Infrastructure team then built us a single Bamboo plan to deploy all three components at once, which made updating each significantly easier. We’d generally notice that the daemon had stopped provisioning groups to our LDAP servers, and then look in Splunk to find our logs littered with NullPointerExceptions related to Point-in-Time (PIT) records.
Because we didn’t fully grasp what Grouper was doing here, we would often delete the offending row out of the temporary change log table. This would get things going again (or, in the latter case, we would update a DB row manually to tell the provisioner to skip the entry in question).
After many copy backs from production into our dev environment, and much monkeying around in SQL Developer, I eventually ascertained that all our woes stemmed from our “Confluence-Administrators” group. We then were able to observe that whenever anyone was added to a confluence group in Grouper, our Daemon started spewing NullPointerExceptions.
To provide this, our Platform team stood up three EC2 instances (one each for dev, QA, and prod) with appropriate permissions to pull Docker containers and fetch credentials. Early in the Grouper 2.4 development process I was consolidating our config from several places into one repository.
When I finally got it ready to turn on locally in Kubernetes, things seemed to be working fine, until reports started coming in that people were disappearing from groups in our production LDAP servers.
Happily the groups were fixed later that afternoon once production Grouper ran a proper LDAP full sync. The improved deployment process made pushing configuration changes and new base code much easier.
A deployment proper merely changes a parameter on a CloudFormation stack, which prompts the EC2 instances in the cluster to grab the new container. Our only outage occurred for about 10 minutes during one deploy that went awry: a CloudFormation stack update got stuck, and we intervened manually.
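The pattern, sketched in CloudFormation (the logical names, parameter, and image URI are all hypothetical, not our actual templates):

```yaml
# Sketch: a deploy is just an update-stack call that bumps ImageTag,
# which changes the task definition and rolls the new container out.
Parameters:
  ImageTag:
    Type: String
Resources:
  GrouperTaskDef:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: grouper-ui
          Image: !Sub "123456789012.dkr.ecr.us-east-1.amazonaws.com/grouper:${ImageTag}"
          Memory: 4096
```

The appeal of this design is that a release never touches the instances directly; everything flows through one parameter change, which also makes rollback a matter of re-running the update with the previous tag.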
Each interested application then has its own SQS queue; the combination of SNS and SQS gives us retries and a dead-letter queue effectively for “free.” (Happily, our Platform team had this architecture already worked out; it was ready for us to pick up and run with.) We control which groups trigger notifications by using an attribute, much like Grouper’s PSPNG LDAP provisioner.
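The fan-out wiring looks roughly like this in CloudFormation (resource names are illustrative; our actual templates differ): one SNS topic for group-change events, a per-application SQS queue subscribed to it, and a dead-letter queue behind each application queue.

```yaml
Resources:
  GroupChangesTopic:
    Type: AWS::SNS::Topic
  AppQueue:
    Type: AWS::SQS::Queue
    Properties:
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt AppDeadLetterQueue.Arn
        maxReceiveCount: 5      # after 5 failed receives, park the message
  AppDeadLetterQueue:
    Type: AWS::SQS::Queue
  AppSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref GroupChangesTopic
      Protocol: sqs
      Endpoint: !GetAtt AppQueue.Arn
```

Each consuming application polls only its own queue, so a slow or broken consumer never blocks the others, and poison messages end up in that application's DLQ instead of being retried forever.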
Soon we’ll also be using it to sync Google Groups; it’s our new default for any sort of custom provisioner. Our main complaint from users during this period was that provisioning to LDAP and Active Directory was sometimes getting stuck.
Initially I was perplexed, because we’d fixed the PIT errors and there weren’t any exceptions in the logs. Something was nonetheless quietly blocking the change log, which prevented downstream provisioners (in this case for Active Directory or LDAP) from getting notified.
- jars for custom connectors (SNS, Oracle, Atlassian, health checks)
- Apache configuration for mod_auth_cas
- logic for pulling creds from our cred store

Yet the auto-DDL update failed to run appropriately against our production database because two views that it was trying to remove did not exist.
It’s unclear to me just how that was the case (since the auto-DDL update had worked fine in QA with data that had been refreshed from PROD recently). But once I found the DDL it was trying to run, removed the two DROP VIEW statements, and ran it manually, Grouper 2.5 came up appropriately.
Our Oracle connector loads a properties file from within its jar and then checks the file system for further overlays. Because the daemon now runs within Tomcat instead of as a bare Java process, we had to change a line of code in our connector.
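The general load-from-jar-then-overlay pattern the connector follows can be sketched like this (the class, keys, and paths are hypothetical illustrations, not our actual connector code):

```java
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.Writer;
import java.util.Properties;

// Sketch of a classpath-first, filesystem-overlay property loader.
public class OverlayConfig {

    // Load defaults from the bundled stream (e.g., a resource inside the
    // jar), then let an optional filesystem overlay override them.
    public static Properties load(InputStream bundled, File overlay) throws IOException {
        Properties props = new Properties();
        if (bundled != null) {
            props.load(bundled);               // defaults baked into the jar
        }
        if (overlay != null && overlay.exists()) {
            try (InputStream in = new FileInputStream(overlay)) {
                props.load(in);                // filesystem values win
            }
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // Simulate the jar-bundled defaults and a filesystem overlay.
        InputStream bundled = new ByteArrayInputStream(
                "db.url=jar-default\ndb.user=app".getBytes());
        File overlay = File.createTempFile("overlay", ".properties");
        overlay.deleteOnExit();
        try (Writer w = new FileWriter(overlay)) {
            w.write("db.url=from-filesystem\n");
        }
        Properties p = load(bundled, overlay);
        System.out.println(p.getProperty("db.url"));   // from-filesystem
        System.out.println(p.getProperty("db.user"));  // app
    }
}
```

The one-line fix in our case amounted to changing how the bundled resource is located, since the classloader context differs when the code runs inside a servlet container rather than a standalone JVM.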
For the initial 2.5.23 deploy we reverted the daemon to run as a bare Java process. When the components were initially moved into the cloud, the ECS task definition was only given 800 MB of memory (this predated my arrival at UMD and wasn’t something I was aware of).
As we moved to Grouper 2.5, it became apparent that we needed to rethink our internal packaging and deployment strategy. Running Maven twice provided an opportunity to introduce duplicate jars of different versions onto the classpath, which caused some minor headaches in production.
To fix this, we’ve consolidated all the Grouper container configuration into a single repo. The Java dependencies are listed in this project’s pom.xml and pulled in via mvn package during the build process.
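Illustratively (the group/artifact IDs and version below are placeholders, not our actual dependency list), the connector jars become ordinary Maven dependencies resolved in a single build, so there is no second pass to introduce mismatched versions:

```xml
<!-- pom.xml sketch: one build, one dependency graph -->
<dependencies>
  <dependency>
    <groupId>com.example.iam</groupId>
    <artifactId>grouper-sns-connector</artifactId>
    <version>1.0.0</version>
  </dependency>
</dependencies>
```

With everything declared in one pom.xml, Maven's dependency mediation picks a single version of each transitive jar, which is exactly the duplicate-jar problem the two-build setup kept reintroducing.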
Colocated Database

We have a variety of plans for what comes next for our Grouper deployment. Even though College Park, MD is not far from Amazon’s us-east-1 in Northern Virginia, we still incur an extra couple of milliseconds of latency between the on-premises data centers and our app servers in AWS.
This ends up making a pretty big difference for an application like Grouper that really taxes the database. For a variety of reasons (organizational and technical), we probably won’t be able to migrate to using something like Aurora in production in the immediate future.
After banging my head against CloudFormation for a day or two, I managed to stand up an Aurora Postgres cluster with a single RDS instance. Finally, our Platform Engineering Team, especially Eric Sturdiest, has been instrumental in making Grouper a low-risk application to deploy and run.
Chris Hyzer, Shilen Patel, Carey Black, and many others have provided helpful advice on how best to deploy this powerful and complex tool.